| | EveryDream2trainer | InvokeAI |
|---|---|---|
| Mentions | 48 | 240 |
| Stars | 766 | 22,115 |
| Growth | - | 3.8% |
| Activity | 9.0 | 10.0 |
| Latest commit | 7 days ago | 4 days ago |
| Language | Python | TypeScript |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
EveryDream2trainer
- Question on SD Finetuning
I'm using EveryDream2 with SD v1-based models. You can define whatever resolution you want for training, as long as your VRAM allows it.
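As a hedged illustration of where that resolution would be set: EveryDream2 is driven by a JSON training config, and a fragment along these lines selects a higher training resolution (key names here are my best understanding of the trainer's `train.json` schema and should be checked against the project's docs):

```json
{
  "resume_ckpt": "sd_v1-5_vae",
  "data_root": "input",
  "resolution": 768,
  "batch_size": 4,
  "max_epochs": 30
}
```

Larger `resolution` values increase VRAM use roughly quadratically, which is why the quote above qualifies it with "as long as your VRAM allows it."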
- Freedom: a finetuned 2.1 that can generate at 1024x and above, and often works without a negative prompt. Releasing this week or next; a demo will be available for testing in the coming days. Here are some creations from a closed beta I released on Twitter yesterday: 20 people, 3 hours, 1,500 gens. I hope you enjoy. More in the imgur album.
There is a demo optimizer settings file here that uses special settings for the text encoder: https://github.com/victorchall/EveryDream2trainer/blob/main/optimizerSD21.json
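Purely as a hypothetical sketch of the idea (the key names below are invented for illustration; the linked optimizerSD21.json defines the real schema), a config that trains the text encoder at a lower learning rate than the U-Net might take a shape like:

```json
{
  "optimizer": "adamw8bit",
  "lr": 1e-6,
  "text_encoder_lr": 2e-7
}
```

The point of separate settings is that the text encoder tends to overfit faster than the U-Net, so it is commonly given a smaller learning rate or frozen earlier.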
- Can we clear up the regularization images concept once and for all?
- Are there any recent, or still relevant, tutorials on training LoRAs within Dreambooth? Any specific / special settings to take advantage of my 4090?
If you have a large dataset of pictures, I'd recommend https://github.com/victorchall/EveryDream2trainer instead of Dreambooth. It has decent documentation (well, for an open-source project) and a very nice validation feature (disabled by default) that gives you good feedback on how the training is progressing.
- Train a model from 300k images?
EveryDream2 can handle this. They even have tools to help you autocaption using BLIP. https://github.com/victorchall/EveryDream2trainer
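For context on what auto-captioning produces: a captioning model like BLIP emits one text caption per image, and a common dataset convention (which, as I understand it, EveryDream2 also reads) is a sidecar `.txt` file next to each image. A minimal sketch of writing such sidecars — the function name and layout are my own, and in practice the caption strings would come from the captioning model rather than being passed in by hand:

```python
from pathlib import Path

def write_caption_sidecar(image_path: str, caption: str) -> Path:
    """Write `caption` to a .txt file next to the image.

    For example, input/dog_001.jpg gets input/dog_001.txt,
    so the trainer can pair each image with its caption.
    """
    txt_path = Path(image_path).with_suffix(".txt")
    txt_path.write_text(caption.strip() + "\n", encoding="utf-8")
    return txt_path
```

This only shows the file convention; generating the captions themselves for 300k images is the part BLIP-based tooling automates.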
- [Dreambooth] The docs for this Dreambooth-like trainer, Everydream2
- Resources for artists interested in using StableDiffusion as a tool?
- Is Joe Penna's DreamBooth still the best option for training photorealistic persons or faces?
- Can we identify most Stable Diffusion Model issues with just a few circles?
I recommend EveryDream2 for training; it has a lot of nice features. I'm not sure there is a proper manual for learning how to train, but there is a lot of information available. I have been learning these subjects for a few months myself.
InvokeAI
- Why YC Went to DC
You're correct if you're focused exclusively on the work surrounding building foundation models to begin with. But if you take a broader view, having open models that we can legally fine tune and hack with locally has created a large and ever-growing community of builders and innovators that could not exist without these open models. Just take a look at projects like InvokeAI [0] in the image space or especially llama.cpp [1] in the text generation space. These projects are large, have lots of contributors, move very fast, and drive a lot of innovation and collaboration in applying AI to various domains in a way that simply wouldn't be possible without the open models.
[0] https://github.com/invoke-ai/InvokeAI
[1] https://github.com/ggerganov/llama.cpp
- Stable Diffusion 3
Probably not, since I have no idea what you're talking about. I've just been using the models that InvokeAI (2.3, I only just now saw there's a 3.0) downloads for me [0]. The SD1.5 one is as good as ever, but the SD2 model introduces artifacts on (many, but not all) faces and copyrighted characters.
[0] https://github.com/invoke-ai/InvokeAI
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I actually used the rocm/pytorch image you also linked.
I'm not sure what you're pointing to with your reference to the Fedora-based images. I'm quite happy with my NixOS install and really don't want to switch to anything else. And as long as I have the correct kernel module, my host OS really shouldn't matter to run any of the images.
And I'm sure it can be made to work with many base images. My point was just that the dependency management around pytorch was in a bad state, where it is extremely easy to break.
> Anyways, hopefully this PR fixes the immediate issue: https://github.com/invoke-ai/InvokeAI/pull/5714/files
It does! At least for me. It is my PR after all ;)
- Can some expert analyze a github repo and tell us if it's really safe or not?
The data being flagged is not in that github repo, it's fetched from elsewhere and I don't fancy spending time looking for it. The alert is for 'Sirefef!cfg' which has been reported as a false positive with a bunch of other stable diffusion projects (https://www.reddit.com/r/StableDiffusion/comments/101zjec/trojanwin32sirefefcfg_an_apparently_common_false/, https://www.reddit.com/r/StableDiffusion/comments/xmhukb/trojan_in_waifudiffusion_model_file/, https://github.com/invoke-ai/InvokeAI/issues/2773 )
- What is the most efficient port of SD to mac?
I haven't tried it recently, but InvokeAI runs on Mac. I used to run it on my MacBook, but have since gotten a Windows laptop.
- Easy Stable Diffusion XL in your device, offline
There are already a number of local, inference options that are (crucially) open-source, with more robust feature sets.
And if the defense here is "but Auto1111 and Comfy don't have as user-friendly a UI", that's also already covered. https://github.com/invoke-ai/InvokeAI
- Ask HN: Selfhosted ChatGPT and Stable-diffusion like alternatives?
https://github.com/invoke-ai/InvokeAI should work on your machine. For LLM models, the smaller ones should run using llama.cpp, but I don't think you'll be happy comparing them to ChatGPT.
- 🚀 InvokeAI 3.4 now supports LCM & LCM-LoRAs and much more!
- Best ai image generator without a nsfw filter?
Stable Diffusion (see /r/StableDiffusion). There are many tutorials on how to set it up locally and use it. InvokeAI is the easiest way to set it up. https://github.com/invoke-ai/InvokeAI
- What's the best stable diffusion client for base m1 MacBook air?
InvokeAI
What are some alternatives?
- ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
- stable-diffusion-webui - Stable Diffusion web UI
- StableTuner - Finetuning SD in style.
- stable-diffusion
- sd-scripts
- ControlNet - Let us control diffusion models!
- EveryDream-trainer - General fine tuning for Stable Diffusion
- stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
- dreambooth-gui
- kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
- stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM