examples
Deep Learning Examples (by LambdaLabsML)
| | examples | stable-diffusion |
|---|---|---|
| Mentions | 12 | 20 |
| Stars | 789 | 1,023 |
| Growth | 1.0% | - |
| Activity | 7.2 | 0.0 |
| Latest commit | 4 months ago | about 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
examples
Posts with mentions or reviews of examples. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-11.
- SD 1.4: Switching Clip with a new encoder
Hello everyone, I am trying to fine-tune a Stable Diffusion 1.4 model to work on specific images that require specific descriptions. I am following this GitHub repo, which is a fork of the original one; I have 12,000 images and I am at the 20th epoch with a 0.199 loss: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
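As a rough sketch of what pairing "specific images" with "specific descriptions" can look like for this kind of fine-tuning, here is a minimal PyTorch dataset. The one-caption-`.txt`-per-`.jpg` layout and every name in it are illustrative assumptions, not something the linked repo prescribes:

```python
# Hypothetical image-caption dataset for fine-tuning (assumed file layout:
# data/0001.jpg paired with data/0001.txt holding the caption).
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class CaptionedImageDataset(Dataset):
    """Yields (image_tensor, caption) pairs from a folder of .jpg/.txt files."""

    def __init__(self, root: str, size: int = 512):
        self.paths = sorted(Path(root).glob("*.jpg"))
        self.transform = transforms.Compose([
            transforms.Resize(size),
            transforms.CenterCrop(size),
            transforms.ToTensor(),
            # Scale pixels to [-1, 1], the range SD's VAE expects.
            transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        image = self.transform(Image.open(path).convert("RGB"))
        caption = path.with_suffix(".txt").read_text().strip()
        return image, caption
```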
- Custom model training question
It seems there are two ways: 1) use the Dreambooth technique (joepenna's, Shivam's, and lastben's repos), or 2) train on top of the original stable-diffusion model (as described, for example, here: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning, or in XavierXiao's repo).
- Differences between a hypernetwork, embedding and Dreambooth models?
If you want to make or customize a model:
- There's fine-tuning a model (not Dreambooth). You're essentially continuing the training process that the SD authors used. It requires professional-grade AI hardware and takes a while; people seem to not even know this exists. You start with some base model (usually plain SD, but it could be any model) and fine-tune it. You should assume fine-tuning will make the model unsuitable for anything else -- for instance, if you tune on one person's face, expect it to never generate anyone else's face, and if you fine-tune on one art style, any other art style may suck.
- Dreambooth is a different method for fine-tuning a model, needing a fraction of the power and time "real" fine-tuning does. But it still takes a lot of power: the most optimized Dreambooth tools take 12 GB of VRAM, and most graphics cards don't even have that.
- There are several competitors to the Dreambooth method, such as EveryDream, which claim better results and sometimes claim to need only one photo. I'm not sure how things have really played out, especially since you can't tell the difference between "this method sucks" and "this method is great but everyone is using it wrong".
- Hypernetworks take less time and power than Dreambooth; some testify they're better for style training than Dreambooth.
- Textual inversions (TIs) take the least time and power; I recently saw a training method for 6 GB VRAM cards (see the sketch after this list).
- Aesthetic gradients don't need training! :)
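To make the textual-inversion point concrete, here is a minimal conceptual sketch of what TI training sets up: the whole text encoder stays frozen and only one new token embedding is optimized. This is a hedged illustration, not the actual script from any of the repos above; the placeholder token name is an assumption, and a real run would drive the update with the diffusion denoising loss on your training images.

```python
# Conceptual textual-inversion setup (assumed names; not a full trainer).
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# Register a placeholder token and grow the embedding table by one row.
tokenizer.add_tokens(["<my-style>"])  # "<my-style>" is a made-up placeholder
text_encoder.resize_token_embeddings(len(tokenizer))
new_id = tokenizer.convert_tokens_to_ids("<my-style>")

# Freeze everything, then re-enable gradients only on the embedding table.
text_encoder.requires_grad_(False)
embeddings = text_encoder.get_input_embeddings()
embeddings.weight.requires_grad_(True)

optimizer = torch.optim.AdamW([embeddings.weight], lr=5e-4)

# In a real run, each step computes the diffusion denoising loss on prompts
# containing "<my-style>", then zeroes the gradient for every row except the
# new one before optimizer.step(), e.g.:
#   keep = torch.arange(len(tokenizer)) == new_id
#   embeddings.weight.grad[~keep] = 0
```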
- Can't clone from Huggingface?
- Huggingface cloning not working, more info inside
- Was told to crosspost here. My new D&D model!! Trained for 30,000 steps on 2500 manually labelled images. Questions and advice welcome!
I BLIP-captioned the images to try and retrain using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb. I used the BLIP captions and then put "D&D character {race}" in front, where {race} was the race I had manually annotated. After that, for Dreambooth (I roughly followed this: https://www.youtube.com/watch?v=7bVZDeGPv6I), you don't need to rename the images; just put them in the same folder, which you specify in a JSON file that Dreambooth reads to know how to handle each class.
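As a hedged illustration of the captioning step described above, the sketch below uses the BLIP captioner from Hugging Face transformers as a stand-in for whatever BLIP setup the poster actually ran; the checkpoint name, file path, and the way {race} is supplied are assumptions:

```python
# Caption an image with BLIP, then prepend the manual "D&D character {race}"
# tag, roughly as the post describes (all names here are assumptions).
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

def caption_with_race(image_path: str, race: str) -> str:
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    blip_caption = processor.decode(out[0], skip_special_tokens=True)
    return f"D&D character {race}, {blip_caption}"

# e.g. caption_with_race("images/portrait_0001.png", "tiefling")
```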
- My new D&D model!! Trained for 30,000 steps on 2500 manually labelled images
Trained a Dreambooth model from the v1.5 checkpoint. I tried fine-tuning the model using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb, but I didn't achieve results I liked.
- How To Fine Tune Stable Diffusion: Naruto Character Edition
Thank you! This model training did not use Dreambooth. Here is the reference repo I used; it is based on the original training repo for Stable Diffusion. Dreambooth is a more sophisticated framework, and I am very interested in doing a side-by-side comparison against this model as a follow-up.
- [D] DreamBooth Stable Diffusion training now possible in 24GB GPUs, and it runs about 2 times faster.
- [P] Stable Diffusion finetuned on Pokemon!
Code and details: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
stable-diffusion
Posts with mentions or reviews of stable-diffusion. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-03.
- How To Fine Tune Stable Diffusion: Naruto Character Edition
Thank you! This model training did not use Dreambooth. Here is the reference repo I used; it is based on the original training repo for Stable Diffusion. Dreambooth is a more sophisticated framework, and I am very interested in doing a side-by-side comparison against this model as a follow-up.
- Can we sell art created in SD?
- New MidJourney Beta is using Stable Diffusion under the hood
Although I've just learned there is an actual open-source release in a different repository, just not in the Stable Diffusion repo: https://github.com/pesser/stable-diffusion
- Is StableDiffusion really available?
- Open-source rival for OpenAI's DALL-E runs on your graphics card
Thanks :). Do you remember where you got the info that a language model is being trained? Did it come from here?
- GitHub - pesser/stable-diffusion
- How to access the Stable Diffusion software?
All I could find online was the GitHub repo: https://github.com/pesser/stable-diffusion
- Who needs an invite to stable diffusion
- I missed Pride this year so I made this instead
- Emma Watson portraits ✨️ | Stable Diffusion
For example, did you use this one? https://github.com/pesser/stable-diffusion Or did you use this one? https://pypi.org/project/stable-diffusion/
What are some alternatives?
When comparing examples and stable-diffusion you can also consider the following projects:
stable-diffusion-webui - Stable Diffusion web UI
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
artbot-for-stable-diffusion - A front-end GUI for interacting with the AI Horde / Stable Diffusion distributed cluster
stable-diffusion - A latent text-to-image diffusion model
dalle-mini - DALL·E Mini - Generate images from a text prompt