| | fast-stable-diffusion | EveryDream |
|---|---|---|
| Mentions | 239 | 13 |
| Stars | 7,316 | 219 |
| Growth | - | - |
| Activity | 8.6 | 3.3 |
| Latest commit | 20 days ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
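The exact activity formula isn't published here, so the sketch below is only one plausible reading of "recent commits have higher weight": each commit contributes a weight that decays exponentially with its age. The 90-day half-life is an illustrative assumption, not the metric's real parameter.

```python
# Illustrative sketch only: a recency-weighted activity score where each
# commit contributes exp(-age * ln(2) / half_life). The 90-day half-life is
# an assumption for illustration, not the formula actually used above.
import math

def activity_score(commit_ages_days, half_life_days=90.0):
    """Sum of per-commit weights that decay with commit age."""
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * age) for age in commit_ages_days)

# A project with many recent commits scores higher than one whose commits
# are all months old, even if the raw commit counts are equal.
recent = activity_score([1, 3, 7, 10, 14])
stale = activity_score([200, 210, 220, 230, 240])
print(f"recent={recent:.2f} stale={stale:.2f}")  # recent >> stale
```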
fast-stable-diffusion
- Working Colab notebooks for training Dreambooth?
I tried using TheLastBen's fast dreambooth trainer. I managed to train a ckpt file, but I can't run it.
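A common way to actually run such a checkpoint outside the notebook is to load the single .ckpt file with diffusers. A minimal sketch, not the notebook's own code; it assumes a standard SD 1.x single-file checkpoint and a recent diffusers release, and the path, token, and prompt are placeholders.

```python
# Minimal sketch: load a Dreambooth-trained single-file checkpoint with
# diffusers and generate an image. Assumes a standard SD 1.x .ckpt and a
# recent `diffusers` release; the path and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "my_dreambooth_model.ckpt",   # placeholder path to the trained ckpt
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photo of sks person, studio lighting").images[0]
image.save("out.png")
```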
- Running AUTOMATIC1111 on Google Colab
There's a Colab from TheLastBen. It used to be the best option back when AUTOMATIC1111 still worked on Google Colab's free tier. https://github.com/TheLastBen/fast-stable-diffusion
- Stability AI releases its latest image-generating model, Stable Diffusion XL 1.0
- Google Colab disconnects after 5 mins of hosting A1111
Using https://github.com/TheLastBen/fast-stable-diffusion
- I'm kinda new to all of this and just wanted to ask... How can I fix something like this? I tried inpainting, but it didn't work even after changing parameters, and img2img makes it lose quality...
This repo offers a template for how to get started with SD on RunPod: https://github.com/TheLastBen/fast-stable-diffusion. But I know how to code, so I made my own solution.
- Unable to use ControlNet on AUTO1111 GUI - Google Colab Notebook
I can confirm I'm using the latest version of the Colab notebook from this repo (https://github.com/TheLastBen/fast-stable-diffusion). Can anyone point me to a solution to this problem? Thanks in advance!
- Automatic 1111 not working
- Useful Links
TheLastBen's Fast DB SD Colabs, +25-50% speed increase, AUTOMATIC1111 + DreamBooth
- Can you use another base model to train your own model with the TheLastBen or ShivamShrirao Colab?
```
CalledProcessError                        Traceback (most recent call last)
<ipython-input> in <module>()
    182 wget.download('https://github.com/TheLastBen/fast-stable-diffusion/raw/main/Dreambooth/det.py')
    183 print('Detecting model version...')
--> 184 Custom_Model_Version=check_output('python det.py '+sftnsr+' --MODEL_PATH '+MODEL_PATH, shell=True).decode('utf-8').replace('\n', '')
    185 clear_output()
    186 print(''+Custom_Model_Version+' Detected')
```
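The bare CalledProcessError above hides whatever det.py printed to stderr. Below is a hedged sketch of a more diagnosable version of that cell, not TheLastBen's actual code; `sftnsr` and `MODEL_PATH` are the notebook's own variables, stubbed here as placeholders.

```python
# Run det.py without shell=True and capture stderr, so the real error
# surfaces instead of a bare CalledProcessError. Placeholder values stand
# in for the notebook's sftnsr and MODEL_PATH variables.
import subprocess

sftnsr = ""                          # placeholder (set by the notebook)
MODEL_PATH = "/content/model.ckpt"   # placeholder

args = ["python", "det.py"] + ([sftnsr] if sftnsr else []) + [
    "--MODEL_PATH", MODEL_PATH]
result = subprocess.run(args, capture_output=True, text=True)

if result.returncode != 0:
    print("det.py failed:\n", result.stderr)   # shows the real cause
else:
    Custom_Model_Version = result.stdout.strip()
    print(Custom_Model_Version, "Detected")
```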
- How to Install and Run Stable Diffusion in Automatic1111 with Deforum in Google Colab?
Have you tried https://github.com/TheLastBen/fast-stable-diffusion ?
EveryDream
- Editing BLIP captions for textual inversion training - repetition of subject OK?
- Tips for Dreambooth training at higher res
Before I keep experimenting, any prior art or tips here? I considered splitting the high-res squares up into 512x512s manually and training on them alongside the full-picture 512x512s (maybe with "close up" added to the caption?), but that's yet more work. Taking a look at https://github.com/victorchall/EveryDream to see if it might be a better fit.
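For reference, the manual tiling described above is only a few lines of Pillow. A rough sketch under the comment's own assumptions (square high-res inputs, non-overlapping 512x512 crops plus a downscaled full-frame copy); the paths are placeholders.

```python
# Sketch of the manual tiling idea: cut a large training image into
# non-overlapping 512x512 crops alongside a downscaled full-frame copy.
# Assumes square high-res inputs, as in the comment; paths are placeholders.
from PIL import Image

TILE = 512

def make_tiles(path, out_prefix):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    # Downscaled full-picture version for global context.
    img.resize((TILE, TILE)).save(f"{out_prefix}_full.png")
    # Non-overlapping close-up tiles (caption these "close up" per the idea).
    n = 0
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            img.crop((left, top, left + TILE, top + TILE)).save(
                f"{out_prefix}_tile{n}.png")
            n += 1

make_tiles("big_square.png", "dataset/big_square")  # placeholder paths
```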
- I've been collecting millions of images with only public domain/CC0 licensing. I'd like to train a stable diffusion model on the collection. Could someone share their knowledge of what this would take? Otherwise, simply enjoy my library.
In terms of training, you've got some really good links and comments pointing to YouTube tutorials, but if you're interested in more information about finetuning a model (as opposed to training from scratch), this is a good repo with a lot of tools for finetuning, including an auto-captioner using BLIP and automatic file renaming. This is the actual finetuning repo.
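For anyone curious what BLIP auto-captioning looks like in practice, here is a hedged sketch using the public Hugging Face transformers BLIP checkpoint. The repo's own tooling wraps BLIP similarly, but this is not its exact code, and the image path is a placeholder.

```python
# Hedged sketch of BLIP auto-captioning with Hugging Face `transformers`.
# Uses the public BLIP base checkpoint; the image path is a placeholder.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained(
    "Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")   # placeholder path
inputs = processor(image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True)
print(caption)  # e.g. "a painting of a man on a horse"
```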
- Advanced advice for model training / fine-tuning and captioning
- What is the advantage of stable diffusion 2.X? 1.5 seems better in most ways.
Then I either use EveryDream's autocaptioner or the BLIP autocaptioner in the webui, use EveryDream's filename replacer to remove references to "a painting of", manually correct or touch up the captions that need it, then fire up the training tab in the webui.
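The "remove references to 'a painting of'" step is simple to reproduce. A stand-in sketch, not EveryDream's actual filename replacer, assuming captions live in .txt sidecar files next to the images; the directory and phrase are placeholders.

```python
# Stand-in for the caption-cleanup step: strip a boilerplate phrase like
# "a painting of" out of caption sidecar .txt files. Not EveryDream's
# actual code; directory and phrase are placeholders.
from pathlib import Path

PHRASE = "a painting of "

for txt in Path("dataset").glob("*.txt"):    # placeholder directory
    caption = txt.read_text(encoding="utf-8")
    cleaned = caption.replace(PHRASE, "").strip()
    if cleaned != caption:
        txt.write_text(cleaned, encoding="utf-8")
```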
- Merge or train, what is the best option?
Oh yeah, very good results. Using victorchall/EveryDream: Advanced fine tuning tools for vision models (github.com), I was able to train 12 subjects into ProtogenX5.3 with 25 images each (~300 total) + another ~300 "ground truth" images from LAION-5B and FFHQ. EveryDream requires you to caption every image, which is quite time-consuming, but the results are pretty good. As with everything AI, there's a lot of trial and error, though, especially when it comes to figuring out when the model is "done".
- New Photorealistic Model: Dreamlike Photoreal 2.0 (Link in the comments!)
Probably something like this: https://github.com/victorchall/EveryDream
- seek.art MEGA - a new general model for Stable Diffusion. ckpt included.
I use EveryDream. It is pretty technical to set up, but probably the only way to go if you're interested in large-scale model training. It should work well for small-scale stuff too; however, I haven't done as much with that yet. I've mostly used one of the various dreambooth repos for single-subject training.
- How to train your AI!
There are a variety of methods you could try. The basic one is textual inversion; you can find a lot of well-explained tutorials on YouTube. Not many people use it, since dreambooth and hypernetworks are more accessible, but the option is there and you could use it as a last resort.

The next, and probably the most popular, is dreambooth. Again, there are lots of well-explained tutorials on YouTube, and depending on your situation (if you have a powerful GPU, or are willing to pay $0.33/hr to rent a 3090) you can find alternatives like TheLastBen's Colab notebook version, which lets you train for free on Google Colab. With anime-related stuff I personally didn't have much success, though I've seen other people get decent results; I get better results using Joe Penna, but for your use case I think you may want to try another option.

Next are hypernetworks, which are easy to train using automatic1111; again, lots of YouTube tutorials. From my experience it's 50/50; I haven't tried them much, so I have nothing more to say.

The last one, which I think might be the most suitable for you, is EveryDream, though it requires more manual work than the others, and I haven't seen any YouTube tutorial, so you have to figure things out for yourself. You can think of it as Dreambooth + textual inversion. Like I said, it requires a lot more work and a powerful GPU, like the Joe Penna repo, but I think it's the best way to train multiple things without making the model "forget" what it originally knows. You can see an example of how good it is in this other post.
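Since the comment frames EveryDream as roughly "Dreambooth + textual inversion", here is a conceptual sketch of textual inversion's core trick: add one placeholder token and train only its embedding row while the rest of the text encoder stays frozen. This is not EveryDream's (or any repo's) actual training code, and the loss shown is a stand-in for the real diffusion denoising loss.

```python
# Conceptual sketch of textual inversion: one new placeholder token whose
# embedding row is the only trainable parameter. The loss below is a
# placeholder; a real run backpropagates the diffusion denoising loss.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

name = "openai/clip-vit-large-patch14"        # the SD 1.x text encoder
tokenizer = CLIPTokenizer.from_pretrained(name)
text_encoder = CLIPTextModel.from_pretrained(name)

tokenizer.add_tokens(["<my-concept>"])        # new placeholder token
text_encoder.resize_token_embeddings(len(tokenizer))
token_id = tokenizer.convert_tokens_to_ids("<my-concept>")

# Freeze the whole encoder, then allow gradients only into the embedding
# matrix; after backward(), zero out every row except the new token's.
text_encoder.requires_grad_(False)
embeddings = text_encoder.get_input_embeddings()
embeddings.weight.requires_grad_(True)
optimizer = torch.optim.AdamW([embeddings.weight], lr=5e-4, weight_decay=0.0)

ids = tokenizer("a photo of <my-concept>", return_tensors="pt").input_ids
hidden = text_encoder(ids).last_hidden_state  # would condition the UNet
loss = hidden.pow(2).mean()                   # placeholder for denoising loss
loss.backward()
keep = torch.arange(embeddings.weight.shape[0]) != token_id
embeddings.weight.grad[keep] = 0              # train only the new row
optimizer.step()
```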
- Made in Abyss dreambooth model I am working on
What are some alternatives?
DeepFaceLab - DeepFaceLab is the leading software for creating deepfakes.
EveryDream-trainer - General fine tuning for Stable Diffusion
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
stable-diffusion-tensorflow - Stable Diffusion in TensorFlow / Keras
stylegan3-detector
efficient-dreambooth - [Moved to: https://github.com/smy20011/dreambooth-docker]
kohya_ss
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
embedding-inspector - Embedding-inspector extension for AUTOMATIC1111/stable-diffusion-webui
stable-diffusion - A latent text-to-image diffusion model
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning