| | examples | artbot-for-stable-diffusion |
|---|---|---|
| Mentions | 12 | 85 |
| Stars | 789 | 159 |
| Growth | 1.0% | - |
| Activity | 7.2 | 9.4 |
| Latest commit | 4 months ago | about 2 months ago |
| Language | Jupyter Notebook | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
examples
-
SD 1.4: Switching Clip with a new encoder
Hello everyone, I am trying to fine-tune a Stable Diffusion 1.4 model to work on specific images that require specific descriptions. I am following this GitHub repo, which is a fork of the original one. I have 12,000 images and am at the 20th epoch with a loss of 0.199: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
-
Custom model training question
It seems there are two ways: 1) use the Dreambooth technique (joepenna's, Shivam's, or lastben's repos), or 2) train on top of the original stable-diffusion model (as described, for example, here: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning, or in the XavierXiao repo)
-
Differences between a hypernetwork, embedding and Dreambooth models?
If you want to make or customize a model:
- There's fine-tuning a model (not Dreambooth). You're essentially continuing the training process that the SD authors used. It requires professional-grade AI hardware and takes a while; people seem to not even know this exists. You start with some base model (usually plain SD, but it could be any model) and fine-tune it. You should assume the process will make the result unsuitable for anything else -- for instance, if you tune on one person's face, expect it to never generate anyone else's face, and if you fine-tune on one art style, any other art style may suck.
- Dreambooth is a different method for fine-tuning a model, needing a fraction of the power and time "real" fine-tuning does. But it still takes a lot of power: the most optimized Dreambooth tools take 12 GB of VRAM, and most graphics cards don't even have that.
- There are several competitors to the Dreambooth method, such as EveryDream, which claim better results and sometimes claim to need only one photo. I'm not sure how things have really played out, especially since you can't tell the difference between "this method sucks" and "this method is great but everyone is using it wrong".
- Hypernetworks take less time and power than Dreambooth; some testify they are better for style training than Dreambooth.
- Textual inversions (TIs) take the least time and power; I recently saw a training method for 6 GB VRAM cards.
- Aesthetic gradients don't need training! :)
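The last point about textual inversion being so cheap can be illustrated with a toy numpy sketch: the whole model (here just an embedding table) stays frozen, and gradient descent updates only the one new token vector. The names `concept` and `new_token` and the squared-error objective are illustrative assumptions, not actual Stable Diffusion code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "model": an existing vocabulary embedding table that is never updated.
frozen_embeddings = rng.normal(size=(100, 8))

# Hypothetical direction in embedding space that the training images encode.
concept = rng.normal(size=8)

# The single new vector we train -- the only trainable parameters in TI.
new_token = rng.normal(size=8)

lr = 0.1
for _ in range(200):
    grad = 2.0 * (new_token - concept)  # gradient of ||new_token - concept||^2
    new_token -= lr * grad              # only this one vector ever changes

print(np.allclose(new_token, concept, atol=1e-3))
```

Because the optimization touches only a handful of parameters instead of the full network, it fits in far less VRAM than Dreambooth or full fine-tuning.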
- Can't clone from Huggingface?
- Huggingface cloning not working, more info inside
-
Was told to crosspost here. My new D&D model!! Trained for 30,000 steps on 2500 manually labelled images. Questions and advice welcome!
I BLIP-captioned the images to try to retrain using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb. I used the BLIP captions and then put "D&D character {race}" in front, where race was the race I manually annotated. After that, for Dreambooth (I followed this roughly: https://www.youtube.com/watch?v=7bVZDeGPv6I), you don't need to rename the images -- just put them in the same folder, which you specify in a JSON file that Dreambooth reads to know how to handle each class
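The captioning step described above can be sketched in a few lines of Python: prepend the manually annotated race to each BLIP caption. The filenames, captions, races, and the comma used to join prefix and caption are all illustrative assumptions, not the poster's actual data.

```python
# Hypothetical BLIP output: one machine-generated caption per image file.
blip_captions = {
    "img_001.png": "a warrior holding a sword",
    "img_002.png": "a robed figure casting a spell",
}

# Hypothetical manual annotations: the race assigned to each image.
races = {"img_001.png": "dwarf", "img_002.png": "elf"}

# Build the final training captions: "D&D character {race}" in front.
captions = {
    name: f"D&D character {races[name]}, {text}"
    for name, text in blip_captions.items()
}

print(captions["img_001.png"])  # D&D character dwarf, a warrior holding a sword
```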
-
My new D&D model!! Trained for 30,000 steps on 2500 manually labelled images
Trained a Dreambooth model from v1.5 checkpoint. I tried finetuning the model using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb, but I didn't achieve results I liked
-
How To Fine Tune Stable Diffusion: Naruto Character Edition
Thank you! This model's training did not use Dreambooth. Here is the reference repo I used; it is based on the original training repo for Stable Diffusion. Dreambooth is a more sophisticated framework, and I am very interested in doing a side-by-side comparison against this model as a follow-up.
- [D] DreamBooth Stable Diffusion training now possible in 24GB GPUs, and it runs about 2 times faster.
-
[P] Stable Diffusion finetuned on Pokemon!
Code and details: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
artbot-for-stable-diffusion
- ArtBot for Stable Diffusion
-
Show HN: I have created a free text-to-image website that supports SDXL Turbo
I am going to plug artbot, an actually free SD image generator: https://tinybots.net/artbot
The front end is all local, and the backend image generation runs on volunteer hosts on the AI Horde. I sometimes donate my own 3090/2060 (albeit for text generation, not image gen).
-
NSFW AI Tools
No mention of the AI Horde anywhere?
https://tinybots.net/artbot
https://lite.koboldai.net/
- Run LLMs at home, BitTorrent‑style
-
Stability AI releases its latest image-generating model, Stable Diffusion XL 1.0
This AI Horde UI has, IMO, some really good templates and suggestions:
https://tinybots.net/artbot
- SDXL 1.0 Release Candidate now in rotation on the AI Horde
-
A prompt without a subject!
Tried it just now and this is the result. I'm not clear on what the LSDR 3.75 parameter is; I'm using https://tinybots.net/artbot
-
[DISCUSSION] The delegitimization of AI art is nothing new...
Try either of these; in theory they would work on your phone or PC: https://aqualxx.github.io/stable-ui/ https://tinybots.net/artbot
-
Useful Links
ArtBot
- Sci-Fi posters
What are some alternatives?
stable-diffusion
unprompted - Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.
stable-diffusion-webui - Stable Diffusion web UI
civitai - A repository of models, textual inversions, and more
ControlNet - Let us control diffusion models!
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
scribble-diffusion - Turn your rough sketch into a refined image using AI
stable-diffusion-webui-colab - stable diffusion webui colab
OnnxDiffusersUI - UI for ONNX based diffusers
Diffusion-ColabUI - Choose your diffusion models and spin up a WebUI on Colab in one click
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
AI-Horde - A crowdsourced distributed cluster for AI art and text generation