LyCORIS vs alpaca_lora_4bit

| | LyCORIS | alpaca_lora_4bit |
|---|---|---|
| Mentions | 13 | 41 |
| Stars | 1,991 | 529 |
| Growth | - | - |
| Activity | 9.6 | 8.6 |
| Latest commit | 3 days ago | 6 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LyCORIS
- LoRA (LyCORIS) iA3 is amazing (info in 1st comment)
LyCORIS is another implementation of LoRA done by KohakuBlueleaf: https://github.com/KohakuBlueleaf/LyCORIS
- Training LORAs locally guide in text form?
Most guides focus on LoRA training, as that has been around for longer, but I think LoHa can give better results. However, LoHa training is about half as fast (in it/s) and it requires different training settings.
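To make the LoHa-vs-LoRA distinction concrete, here is a minimal sketch of the two delta-weight parameterizations in plain PyTorch. The dimensions are illustrative only and this is not LyCORIS's actual module code: LoRA stores one low-rank product, while LoHa stores two and multiplies them element-wise, which is part of why the same dim/alpha settings behave differently.

```python
import torch

d_out, d_in, r = 320, 320, 8  # made-up layer sizes and rank

# LoRA: delta_W = B @ A, a single rank-r product
A = torch.randn(r, d_in) * 0.01
B = torch.zeros(d_out, r)
delta_lora = B @ A                      # (d_out, d_in), rank <= r

# LoHa: delta_W = (B1 @ A1) * (B2 @ A2), a Hadamard product of two
# rank-r factors; the element-wise product can reach rank up to r**2
A1, A2 = torch.randn(r, d_in) * 0.01, torch.randn(r, d_in) * 0.01
B1, B2 = torch.zeros(d_out, r), torch.zeros(d_out, r)
delta_loha = (B1 @ A1) * (B2 @ A2)      # (d_out, d_in)

print(delta_lora.shape, delta_loha.shape)
```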
- Guide to DreamBooth / LORA / LyCORIS
I've read in some tutorials that it is best to keep the value at 64 or below; here they also suggest not going over 64 (https://github.com/KohakuBlueleaf/LyCORIS).
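One reason for the "64 or below" advice is simple parameter count: adapter size grows linearly with the network dim. A rough back-of-the-envelope calculation for a single linear layer, with hypothetical sizes not tied to any particular model:

```python
# Approximate LoRA parameter count for one d_in x d_out linear layer:
# A is (dim, d_in) and B is (d_out, dim), so params = dim * (d_in + d_out).
d_in, d_out = 768, 768
for dim in (8, 16, 32, 64, 128):
    params = dim * (d_in + d_out)
    print(f"dim={dim:>3}: {params:,} adapter parameters in this layer")
```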
- LyCORIS doesn't work with inpainting models
Does anyone know how to make LyCORIS models (https://github.com/KohakuBlueleaf/LyCORIS) work with inpainting models?
- wtf is a lycoris?
- I wonder what to do with this?
- I'm the creator of LoRA. How can I make it better?
I think it was linked already, but this is also relevant for LoRA: https://github.com/KohakuBlueleaf/LyCORIS Nice work!
- LoRA: Low-Rank Adaptation of Large Language Models
There are some WIP evolutions of SD LoRA in the works, like LoCon and LyCORIS: https://github.com/KohakuBlueleaf/LyCORIS
- What the hell is a Locon/Loha model?
- SD fine-tuning methods compared: a benchmark
You might want to expand LoRA to include LoCon and LoHa, and also add a column for VRAM requirements. Think of them as a more complete LoRA that adapts the kernels in the convolutional layers rather than just the weights of the feed-forward network. Support is still quite limited, but it's starting to pick up steam: https://github.com/KohakuBlueleaf/LyCORIS
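The parenthetical about convolutional kernels is the core LoCon idea. A minimal PyTorch sketch of a low-rank branch attached to a Conv2d layer, with made-up sizes and not LyCORIS's actual implementation:

```python
import torch
import torch.nn as nn

# LoCon-style low-rank branch for a conv layer: instead of adapting only
# nn.Linear weights (classic LoRA), factor the conv update into a k x k
# "down" conv to a small rank and a 1x1 "up" conv back to out_ch.
in_ch, out_ch, k, rank = 320, 320, 3, 8   # hypothetical sizes

base = nn.Conv2d(in_ch, out_ch, k, padding=1)           # frozen weight
down = nn.Conv2d(in_ch, rank, k, padding=1, bias=False)  # trainable
up = nn.Conv2d(rank, out_ch, 1, bias=False)               # trainable
nn.init.zeros_(up.weight)               # start as a no-op, like LoRA's B = 0

x = torch.randn(1, in_ch, 64, 64)
y = base(x) + up(down(x))               # frozen conv + low-rank update
print(y.shape)
```

During fine-tuning the base conv stays frozen and only the down/up pair is trained, mirroring how LoRA freezes the linear weight and trains A and B.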
alpaca_lora_4bit
- Open Inference Engine Comparison | Features and Functionality of TGI, vLLM, llama.cpp, and TensorRT-LLM
For training there is also https://github.com/johnsmith0031/alpaca_lora_4bit
- Quantized 8k Context Base Models for 4-bit Fine Tuning
I've been trying to fine-tune an erotica model on some large-context chat history (reverse proxy logs) and a literotica-instruct dataset I made, with a max context of 8k. The large context size eats a lot of VRAM, so I've been trying to find the most efficient way to experiment, considering I'd like to do multiple runs to test some ideas. So I'm going to try https://github.com/johnsmith0031/alpaca_lora_4bit, which is supposed to train faster and use less memory than qlora.
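For context on the comparison being drawn: alpaca_lora_4bit trains a LoRA on top of a GPTQ-quantized base using its own loaders, whereas the qlora approach the commenter mentions loads the base in 4-bit with bitsandbytes and attaches a LoRA via PEFT. A rough sketch of that bitsandbytes/PEFT baseline, with the model name and hyperparameters as placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model (QLoRA-style)
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",              # placeholder base model
    quantization_config=bnb,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach a trainable LoRA adapter on the attention projections
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```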
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on the popular alpaca-lora work by u/tloen, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
- Do we still need the monkey patch with the ExLlama loader for LoRA?
"Using LoRAs with GPTQ-for-LLaMa: This requires using a monkey patch that is supported by this web UI: https://github.com/johnsmith0031/alpaca_lora_4bit"
- Why isn’t QLoRA being used more widely for fine tuning models?
4-bit GPTQ LoRA training has been available since early April. I did not see any comparison to it in the QLoRA paper, or even a mention, which makes me think they were not aware it already existed.
- Fine-tuning with alpaca_lora_4bit on 8k context SuperHOT models
- Any guide/intro to fine-tuning anywhere?
https://github.com/johnsmith0031/alpaca_lora_4bit is still the SOTA: faster than qlora, and it trains on a GPTQ base.
- "Samantha-33B-SuperHOT-8K-GPTQ" now that's a great name for a true model.
I would also like to know how one would fine-tune this in 4-bit. I think one could take the merged 8K PEFT with the LLaMA weights, quantize it to 4-bit, and then train with https://github.com/johnsmith0031/alpaca_lora_4bit?
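The "quantize the merged model to 4-bit" step the commenter describes is typically done with a GPTQ tool. A rough sketch of that step using AutoGPTQ, where the paths and calibration text are placeholders and the exact API may differ between versions:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

merged_dir = "path/to/merged-llama-superhot-8k"   # hypothetical path
quant_config = BaseQuantizeConfig(bits=4, group_size=128)

tokenizer = AutoTokenizer.from_pretrained(merged_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_pretrained(merged_dir, quant_config)

# GPTQ needs calibration samples; a real run would use representative text.
examples = [tokenizer("Example calibration text for GPTQ.", return_tensors="pt")]
model.quantize(examples)
model.save_quantized(merged_dir + "-4bit-gptq")
```

The resulting GPTQ checkpoint is what a 4-bit LoRA trainer such as alpaca_lora_4bit would then load as its frozen base.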
- Help with QLoRA
I was under the impression that you just git clone this repo into text-generation-webui/repositories (so you would have GPTQ_for_Llama and alpaca_lora_4bit in the folder) and then load with the monkey patch. Is that not correct? I also tried downloading alpaca_lora_4bit on its own, git cloning text-gen-webui inside it, installing requirements.txt for both, and running with the monkey patch. I was following the alpaca_lora_4bit sections "Text Generation Webui Monkey Patch" and "monkey patch inside webui".
- Best uncensored model for an a6000
I don't have any familiarity with ESXi, but I can say that there are quite a few posts about people doing it on Proxmox. I currently have a machine with 2x3090 passing through to VMs. When I'm training, I pass them both through to the same VM and can do 4-bit LoRA training on LLaMA-33B using https://github.com/johnsmith0031/alpaca_lora_4bit. Then, at inference time, I run a single card in a different VM and have an extra card available for experimentation.
What are some alternatives?
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
flash-attention - Fast and memory-efficient exact attention
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
StableLM - StableLM: Stability AI Language Models
sd-webui-additional-networks
safetensors - Simple, safe way to store and distribute tensors
kohya_ss
alpaca-lora - Instruct-tune LLaMA on consumer hardware
StableTuner - Finetuning SD in style.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.