| | sd-scripts | bitsandbytes-rocm |
|---|---|---|
| Mentions | 64 | 4 |
| Stars | 4,253 | 38 |
| Growth | - | - |
| Activity | 9.7 | 8.8 |
| Last commit | about 20 hours ago | 12 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
- Stars: the number of stars that a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
sd-scripts
- Everything you know about loss is a lie
- Evidence that LoRA extraction in Kohya is broken?
- Stable Diffusion XL (SDXL) DreamBooth training with EMA (Exponential Moving Average) on the way
- Installing kohya_ss GUI on AWS
This repository mostly provides a Windows-focused Gradio GUI for Kohya's Stable Diffusion trainers... but support for Linux is also provided through community contributions.
- Question on SD Finetuning
- Trying to put up a simple dreambooth for sdxl, but an error pops up
Leaving this here because I'm very tired. This is the .ipynb file that uses sdxl_train.py from the https://github.com/kohya-ss/sd-scripts/tree/sdxl repo, in case anybody can figure out why, when it gets to training, I get this very empty error: "[00:09:11] WARNING The following values were not passed to "
- Finally SDXL coming to the Automatic1111 Web UI
You can try and test training LoRAs now: https://github.com/kohya-ss/sd-scripts/tree/sdxl
- Help with LORA Training - Kohya_ss Regularization
This might help.
- Need a LoRA training guide for Linux
Kohya_ss sd-scripts seems to be the standard for LoRA training. The linked page has an English translation, but doesn't really have system-specific tips. Someone else has a popular GUI for it, but it's designed with Windows in mind. There's another, simpler GUI, but it's still in development and the dev doesn't do any testing on Linux. With any of these, I run into dependency conflicts like crazy.
- SDXL 0.9 is wild but trying to imagine where we go from here is breaking my brain.
"Direct training" is already feasible with masking in kohya-ss: https://github.com/kohya-ss/sd-scripts/pull/589
bitsandbytes-rocm
- Any methods to train using AMD?
Install dependencies like hipblas-devel, hipsparse-devel, hipcub-devel, git, python3.10, make, libstdc++-devel and accelerate, plus ROCm and HIP, then:
git clone https://github.com/bmaltais/kohya_ss && cd kohya_ss
python3.10 -m venv venv
source venv/bin/activate
pip3 install torch==1.13.1 torchvision==0.14.1 torchtext==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/rocm5.2  # problems on 2.0.0 last I tried, but kohya has gotten updates since then
pip3 install --upgrade -r requirements.txt
pip3 uninstall tensorflow && pip3 install tensorflow-rocm
pip uninstall bitsandbytes && git clone https://github.com/broncotc/bitsandbytes-rocm  # bitsandbytes not required if not using adam8?
cd bitsandbytes-rocm && nano Makefile  # replace all 3 instances of 5.3.0 with 5.4.3
make hip
python3 setup.py install
(A sanity-check snippet for this setup follows after this list.)
- How to run Pygmalion on 4.5GB of VRAM with full context size.
There are a lot of ROCm versions of bitsandbytes, for example this one: https://github.com/broncotc/bitsandbytes-rocm. The problem is compatibility with most of the requirements. Kobold does a better job than ooba at offering a more streamlined approach for AMD users. (An 8-bit loading sketch follows after this list.)
- Is it possible to load a model in 8bit precision with an AMD card? (6700xt)
- Have you got LoRA training running on an AMD GPU?
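After a build like the one quoted in the first item above, a short script can confirm that the ROCm PyTorch wheel and the bitsandbytes fork are actually usable. A minimal sketch, assuming the venv from those steps is active; the optimizer choice mirrors kohya's 8-bit Adam option:

```python
# Sanity check for a ROCm PyTorch + bitsandbytes-rocm build (run inside the venv).
import torch

print("torch:", torch.__version__)                 # e.g. 1.13.1+rocm5.2
print("HIP runtime:", torch.version.hip)           # None on CPU/CUDA builds, a version string on ROCm
print("GPU visible:", torch.cuda.is_available())   # ROCm builds answer through the CUDA API

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))

    # bitsandbytes-rocm is a fork of bitsandbytes, so it should expose the same
    # optimizer classes, e.g. the 8-bit Adam that kohya's 8-bit Adam flag relies on.
    import bitsandbytes as bnb
    param = torch.nn.Parameter(torch.zeros(16, device="cuda"))
    opt = bnb.optim.Adam8bit([param], lr=1e-4)
    print("Adam8bit constructed:", opt is not None)
```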
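On the 8-bit question above: with a working bitsandbytes build, transformers can load a model with its linear layers quantized to int8. A minimal sketch, assuming a transformers version that accepts load_in_8bit and a checkpoint that fits in VRAM after quantization; the model id is only an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PygmalionAI/pygmalion-6b"  # example checkpoint; substitute your own

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # let accelerate place layers on the available GPU
    load_in_8bit=True,   # quantize linear layers to int8 through bitsandbytes
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Whether this works on an AMD card depends on the bitsandbytes fork matching the version transformers expects, which is the compatibility problem mentioned in the Pygmalion thread above.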
What are some alternatives?
- kohya_ss
- bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
- sd_dreambooth_extension
- GPTQ-for-LLaMa - 4-bit quantization of LLMs using GPTQ
- ComfyUI - The most powerful and modular stable diffusion GUI, API and backend with a graph/nodes interface.
- text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
- lora - Using low-rank adaptation to quickly fine-tune diffusion models.
- LoRA_Easy_Training_Scripts - A UI made in PySide6 to make training LoRA/LoCon and other LoRA-type models in sd-scripts easy
- kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
- LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
- EveryDream2trainer