kohya_ss vs sd-webui-lobe-theme

| | kohya_ss | sd-webui-lobe-theme |
|---|---|---|
| Mentions | 132 | 77 |
| Stars | 8,630 | 2,252 |
| Growth | - | 4.0% |
| Activity | 9.8 | 9.2 |
| Latest commit | 10 days ago | 4 days ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
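The tracker doesn't publish its exact formula, but a recency-weighted score like the one described can be sketched as exponential decay over commit age (the half-life below is an assumption, not the site's actual parameter):

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=30):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits
    count more than older ones. Illustrative only; the real
    formula used by the tracker is not published."""
    score = 0.0
    for d in commit_dates:
        age = (today - d).days
        score += 0.5 ** (age / half_life_days)
    return score

today = date(2024, 1, 1)
recent = [today - timedelta(days=n) for n in (1, 2, 3)]
old = [today - timedelta(days=n) for n in (300, 310, 320)]
# Three recent commits outweigh three much older ones.
print(activity_score(recent, today) > activity_score(old, today))  # True
```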
kohya_ss
- Some semi-advanced LoRA & kohya_ss questions
Many of the options are explained here https://github.com/bmaltais/kohya_ss/wiki/LoRA-training-parameters
- Lora training with Kohya issue
Training in BF16 might solve this issue, from what I saw in this ticket. I know other people ran into the issue too: https://github.com/bmaltais/kohya_ss/issues/1382
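Why BF16 can help: FP16 overflows to infinity above 65504, so a spiking loss or gradient turns into inf/NaN mid-training, while BF16 keeps float32's 8-bit exponent range at reduced precision. A quick numpy illustration of the FP16 side (numpy has no native bfloat16, so float32 stands in for the shared exponent range):

```python
import numpy as np

# FP16 overflows to inf above ~65504; activations or gradients
# that spike past this during training become inf/NaN.
print(np.finfo(np.float16).max)        # 65504.0
print(np.float16(70000.0))             # inf

# BF16 trades mantissa bits for float32's 8-bit exponent, so the
# same value stays finite (demonstrated here with float32, whose
# exponent range BF16 shares).
print(np.float32(70000.0))             # 70000.0
```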
- What is the best way to merge multiple loras into one model?
For lycoris loras you can use the command-line script from the kohya-ss repo: https://github.com/bmaltais/kohya_ss/blob/master/networks/merge_lora.py. I have an older version checked out from late July; it had a separate merge_lycoris.py for this purpose. It's probably unified now into a single file.
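Conceptually, merging a LoRA into a checkpoint folds the low-rank delta into each base weight: W' = W + ratio * (alpha / rank) * (up @ down). A minimal numpy sketch of that math (illustrative only; this is not the kohya script itself, and the variable names are mine):

```python
import numpy as np

def merge_lora_weight(base_w, lora_down, lora_up, alpha, ratio=1.0):
    """Fold a LoRA delta into a base weight matrix.

    base_w:    (out, in) base model weight
    lora_down: (rank, in) "down" projection
    lora_up:   (out, rank) "up" projection
    alpha:     LoRA alpha; effective scale is alpha / rank
    ratio:     merge strength (analogous to a --ratios value)
    """
    rank = lora_down.shape[0]
    scale = alpha / rank
    return base_w + ratio * scale * (lora_up @ lora_down)

rng = np.random.default_rng(0)
base = rng.standard_normal((320, 320))
down = rng.standard_normal((8, 320))   # rank 8
up = rng.standard_normal((320, 8))
merged = merge_lora_weight(base, down, up, alpha=8.0)
print(merged.shape)  # (320, 320)
```

With ratio=0 the base weights come back unchanged, which is a handy sanity check after a merge.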
- Evidence that LoRA extraction in Kohya is broken?
- Merging Lora with Checkpoint Model?
I usually do that with kohya_ss, a tool made for making LoRAs and finetunes. It might be a bit of a pain to set up just to do this one task, but if nobody gives you an easier method, look into it. https://github.com/bmaltais/kohya_ss
- How I got Kohya_SS working on Arch Linux, including an up-to-date pip requirements file
After that, make your staging directory, do the git clone https://github.com/bmaltais/kohya_ss.git, and navigate inside it.

Now, here's where things can become a pain. I used pyenv to set my system-level Python to 3.10.6 with pyenv global 3.10.6, though you can probably just use "local" and do it for the current shell. You NEED it to be active, however, before you set up your venv. If you do python --version and get 3.10.6, you're ready for the next part.

Make your venv with python -m venv venv. This is the simplest way; it'll create a virtual environment in your current folder named venv. Do a source venv/bin/activate, then which python to make sure it's using the Python from the venv.

Now for the fun part. The included setup scripts have been flaky for me, so I just went through the requirements and installed everything by hand. I'm writing this guide for Nvidia, because I just got a 4090 for this stuff. If this ends up working well for others and there's demand, I'll try to reproduce it for AMD (but I'll be honest, I got an Nvidia card because bitsandbytes doesn't have full ROCm support, nor do most libraries, so it's not very reliable).

After installing everything and testing that it works at least at a basic level for dreambooth training, my finished requirements.txt for pip is as below:
- The best open source LoRA model training tools
Earlier I created a post where I asked for recommendations for LoRA model training tutorials. The first one I looked at used the kohya_ss GUI. That GitHub repo already has two tutorials, which are quite good, so I ended up using those:
- Script does...nothing
I have tried my best to research this issue and have not come up with much. It is obvious that it's a backend issue, right? The guides that I used: https://github.com/bmaltais/kohya_ss and https://github.com/pyenv-win/pyenv-win/
- Using LoRa on SDXL 1.0 (not using the Kohya GUIs)
- How do I reduce the size of my Lora models?
I am training on a 12GB 3060 using kohya_ss. Is there a setting or something I'm missing?
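On size: a LoRA file scales roughly linearly with the network rank (network_dim in kohya_ss), since each adapted layer stores a rank-by-in "down" matrix and an out-by-rank "up" matrix; halving the rank roughly halves the file. A back-of-the-envelope sketch (the 768-wide layer is a hypothetical example):

```python
def lora_params(in_features, out_features, rank):
    """Parameters added by one LoRA pair on a linear layer:
    down (rank x in) plus up (out x rank)."""
    return rank * in_features + out_features * rank

# Hypothetical 768-wide projection layer, fp16 (2 bytes/param).
for rank in (128, 64, 32, 8):
    n = lora_params(768, 768, rank)
    print(rank, n, f"{n * 2 / 1024:.0f} KiB")
```

So dropping network_dim from 128 to 8 shrinks each layer's contribution by 16x; saving in fp16 instead of fp32 halves it again.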
sd-webui-lobe-theme
- Upscayl – Free and Open Source AI Image Upscaler
Upscayl is very approachable, but lacked many features I needed. I ended up using https://github.com/AUTOMATIC1111/stable-diffusion-webui after upscaling became part of my regular workflow, but for someone who just needs a few images enhanced, it's an ideal tool.
- The Basics of AI Image Generation: How to create your own AI-generated image using Stable Diffusion on your local machine.
For the Git alternative, simply right-click on the location you want to put the Stable Diffusion and select “Git Bash Here”, then paste this on the CLI: git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Stable Cascade
ComfyUI is similar to Houdini in complexity, but immensely powerful. It's a joy to use.
There are also a large amount of resources available for it on YouTube, GitHub (https://github.com/comfyanonymous/ComfyUI_examples), reddit (https://old.reddit.com/r/comfyui), CivitAI, Comfy Workflows (https://comfyworkflows.com/), and OpenArt Flow (https://openart.ai/workflows/).
I still use AUTO1111 (https://github.com/AUTOMATIC1111/stable-diffusion-webui) and the recently released and heavily modified fork of AUTO1111 called Forge (https://github.com/lllyasviel/stable-diffusion-webui-forge).
- Show HN: I made a local wrapper for Automatic 1111
Seems like an interesting project. Regarding the name, is there permission to use something so similar to AUTOMATIC1111 [1]?
> Diffusers will Cuda out of memory/perform very slowly for huge generations, like 2048x2048 images, while Auto 1111 SDK won't.
Do we have some numbers on this? I have seen AUTOMATIC1111 fall over whilst using only half the available GPU VRAM; there seems to be some weirdness where it tries to allocate before de-allocating the last batch or something.
> You can use any of the 6 compatible Real-ESRGAN models/weights with our Real-ESRGAN pipeline for upscaling images. Here are the model ids:
I've previously had trouble trying to use AUTOMATIC1111 upscalers, it seems like it needs more GPU VRAM than just generating the image already upscaled.
[1] https://github.com/AUTOMATIC1111/stable-diffusion-webui
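A rough arithmetic sketch of why upscaling is VRAM-hungry: intermediate feature maps grow with pixel count, so a full-resolution pass over a 2048x2048 output costs 16x the per-map memory of a 512x512 tile, which is why tiled upscalers keep peak VRAM flat regardless of output size. (The channel count and dtype below are illustrative assumptions.)

```python
def feature_map_bytes(width, height, channels=64, bytes_per_el=2):
    """Rough size of one fp16 feature map at a given resolution.
    Illustrative: real upscalers hold many such maps at once."""
    return width * height * channels * bytes_per_el

full = feature_map_bytes(2048, 2048)
tile = feature_map_bytes(512, 512)
print(f"{full / 2**20:.0f} MiB vs {tile / 2**20:.0f} MiB per map")
# 512 MiB vs 32 MiB per map
```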
- Stable Code 3B: Coding on the Edge
You might be thinking of Fooocus: https://github.com/lllyasviel/Fooocus
The Stable Diffusion web interface that got a lot of people's attention originally was Automatic1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Fooocus is definitely more beginner friendly. It does a lot of the prompt engineering for you. Automatic1111 has a ton of plugins, most notably ControlNet which gives you fine grained control over the images, but there is a learning curve.
- Google Imagen 2
- Free or "practically-free" AI picture generator?
Stable Diffusion https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Things to do, to put my old PC to use?
Make it into a stable diffusion server!
- GTA 6 trailer screencaps, photorealistic style
There's no link version; you have to run it locally. You install it from here
- Automatic1111 v1.7.0-RC published
Repository: AUTOMATIC1111/stable-diffusion-webui · Tag: v1.7.0-RC · Commit: 48fae7c · Released by: AUTOMATIC1111
What are some alternatives?
sd_dreambooth_extension
stable-diffusion-webui - Stable Diffusion web UI
EveryDream-trainer - General fine tuning for Stable Diffusion
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
sd-scripts
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
stable-diffusion-webui-amdgpu - Stable Diffusion web UI
kohya_ss_colab - a (successful) attempt to port kohya_ss to colab
stable-diffusion-webui-ux - Stable Diffusion web UI UX
LoRA_Easy_Training_Scripts - A UI made in PySide6 to make training LoRA/LoCon and other LoRA-type models in sd-scripts easy
stable-diffusion-webui-colab - stable diffusion webui colab