alpaca-lora
LyCORIS
|  | alpaca-lora | LyCORIS |
| --- | --- | --- |
| Mentions | 107 | 13 |
| Stars | 18,167 | 1,966 |
| Growth | - | - |
| Activity | 3.6 | 9.6 |
| Latest commit | 2 months ago | 8 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alpaca-lora
-
How to deal with loss for SFT for CausalLM
Here is an example: https://github.com/tloen/alpaca-lora/blob/main/finetune.py
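The usual answer is to mask the prompt tokens out of the loss, which is roughly what the linked finetune.py does when train_on_inputs is disabled. A minimal sketch of that idea, assuming a HuggingFace tokenizer (the model name is illustrative): tokens labeled -100 are ignored by the cross-entropy loss, so only the response tokens are trained on.

```python
# Sketch: mask prompt tokens with -100 so cross-entropy ignores them
# and only response tokens contribute to the SFT loss.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # illustrative

def tokenize_sft_example(prompt: str, response: str, max_length: int = 512):
    full = tokenizer(prompt + response, truncation=True, max_length=max_length)
    prompt_len = len(tokenizer(prompt)["input_ids"])
    labels = list(full["input_ids"])
    prompt_len = min(prompt_len, len(labels))      # guard against truncation
    labels[:prompt_len] = [-100] * prompt_len      # prompt tokens: no loss
    full["labels"] = labels
    return full
```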
-
How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will be using the Alpaca LoRA training script, which automates the process of fine-tuning the model, and Beam for the GPU.
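Under the hood, scripts like this wrap the base model with LoRA adapters via the HuggingFace peft library. A minimal sketch; the hyperparameters are illustrative defaults, not the blog's exact configuration.

```python
# Sketch: attach trainable LoRA adapters to a frozen base model with peft.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()         # only the adapters train
```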
-
Fine-tuning LLMs with LoRA: A Gentle Introduction
Implement the code in Llama LoRA repo in a script we can run locally
-
Newbie here - trying to install Alpaca Lora and hitting an error
Hi all - relatively new to GitHub / programming in general, and I wanted to try to set up Alpaca Lora locally. Following the guide here: https://github.com/tloen/alpaca-lora
-
A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for the best inference speed.
Following up on the popular work of u/tloen's alpaca-lora, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
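For the bitsandbytes side of that comparison, loading the base model in 4-bit before attaching adapters looks roughly like this. A sketch using HuggingFace's BitsAndBytesConfig; the model name is a placeholder.

```python
# Sketch: QLoRA-style 4-bit base model load with bitsandbytes,
# prepared for k-bit LoRA training with peft.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",                 # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,      # compute in fp16
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables grads
```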
- FLaNK Stack Weekly for 20 June 2023
-
Converting to GGML?
If instead you want to apply a LoRA to a PyTorch model, a lot of people use this script to apply the LoRA to the 16-bit model and then quantize it with a GPTQ program afterwards: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
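What that script accomplishes can also be expressed with peft's merge_and_unload, which folds the LoRA deltas into the fp16 base weights so a GPTQ tool can quantize the merged checkpoint. A hedged sketch; the output path is a placeholder.

```python
# Sketch: merge LoRA weights into the fp16 base model, then save
# a plain HF checkpoint for a GPTQ quantizer to consume.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")
merged = model.merge_and_unload()          # W <- W + (alpha/r) * B @ A
merged.save_pretrained("./merged-fp16")    # quantize this with GPTQ next
```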
-
Simple LLM Watermarking - Open Llama 3b LoRA
There are a few papers on watermarking LLM output, but from what I have seen they all use complex methods of detection to allow the watermark to go unseen by the end user, only to be detected by algorithm. I believe that a more overt system of watermarking might also be beneficial. One simple method that I have tried is character substitution. For this model, I LoRA-finetuned openlm-research/open_llama_3b on the alpaca_data_cleaned_archive.json dataset from https://github.com/tloen/alpaca-lora/, modified by replacing all instances of the "." character in the outputs with "ι". The results are pretty good, with the correct substitutions being generated by the model in most cases. It doesn't always work, but this was only a LoRA training run of two epochs of 400 steps each, and 100% substitution isn't really required.
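The dataset edit described above amounts to a few lines over the Alpaca-format JSON. A minimal sketch, assuming the standard "output" field; the output filename is a placeholder.

```python
# Sketch: replace "." with "ι" in every output field of the
# Alpaca-format dataset before LoRA fine-tuning.
import json

with open("alpaca_data_cleaned_archive.json") as f:
    data = json.load(f)

for example in data:
    example["output"] = example["output"].replace(".", "ι")

with open("alpaca_data_watermarked.json", "w") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```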
-
text-generation-webui's "Train Only After" option
I am kind of new to fine-tuning LLMs and am not able to understand what this option exactly refers to. I guess it has the same meaning as the "train_on_inputs" parameter of alpaca-lora.
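Both options boil down to the same -100 label masking shown earlier, keyed here to a marker string: everything up to and including the marker (e.g. a "### Response:" header) is excluded from the loss. A rough, simplified sketch, not text-generation-webui's actual implementation.

```python
# Sketch: "train only after" a marker string by masking all labels
# up to and including the marker with -100.
def mask_before_marker(text: str, marker: str, tokenizer, max_length: int = 512):
    cut = text.index(marker) + len(marker)                  # end of the marker
    enc = tokenizer(text, truncation=True, max_length=max_length)
    prefix_len = len(tokenizer(text[:cut])["input_ids"])    # tokens to mask
    labels = list(enc["input_ids"])
    prefix_len = min(prefix_len, len(labels))               # truncation guard
    labels[:prefix_len] = [-100] * prefix_len               # no loss here
    enc["labels"] = labels
    return enc
```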
-
Learning sources on working with local LLMs
Read the paper and also: https://github.com/tloen/alpaca-lora
LyCORIS
-
LoRA (LyCORIS) iA3 is amazing (info in 1st comment)
Lycoris is another implementation of LoRA done by KohakuBlueleaf: https://github.com/KohakuBlueleaf/LyCORIS
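With kohya-style training scripts, LyCORIS algorithms such as iA3 are selected through the network module flags. A sketch based on the LyCORIS README; exact flags can vary between versions, and the usual model, dataset, and optimizer flags are omitted here.

```sh
# Sketch: pick a LyCORIS algorithm (e.g. iA3) via kohya's sd-scripts.
accelerate launch train_network.py \
  --network_module=lycoris.kohya \
  --network_dim=16 --network_alpha=8 \
  --network_args "algo=ia3"
```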
-
Training LORAs locally guide in text form?
Most guides focus on LoRA training, as that has been around for longer. But I think LoHa can give better results. The training runs at about half the speed (it/s), though, and it requires different training settings.
-
Guide to DreamBooth / LORA / LyCORIS
I've read in some tutorials that the value should be 64 or below; here they also suggest not going over 64 (https://github.com/KohakuBlueleaf/LyCORIS).
-
LyCORIS doesn't work with inpainting models
Does anyone know how to make LyCORIS models (https://github.com/KohakuBlueleaf/LyCORIS) work with inpainting models?
- wtf is a lycoris?
- I wonder what to do with this?
-
I'm the creator of LoRA. How can I make it better?
I think it was linked already but this is also relevant for LoRA: https://github.com/KohakuBlueleaf/LyCORIS Nice work!
-
LoRA: Low-Rank Adaptation of Large Language Models
There are some evolutions of SD LoRA in the works, like LoCon and LyCORIS.
https://github.com/KohakuBlueleaf/LyCORIS
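The paper's core idea fits in a few lines: freeze the pretrained weight W and learn a scaled low-rank update (alpha/r)·BA. A minimal PyTorch illustration of the technique; a sketch, not loralib's actual implementation.

```python
# Sketch: LoRA wraps a frozen Linear with a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # frozen pretrained W
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        # W x + (alpha/r) * B A x; only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```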
- What the hell is a Locon/Loha model?
-
SD fine-tuning methods compared: a benchmark
You might want to expand LoRA to include LoCon and LoHa (and also add a column for VRAM requirements). Think of LoCon as a more complete LoRA that works on the kernels in the convolutional units rather than just the weights of the feed-forward network. Support is still quite limited, but it's starting to pick up steam: https://github.com/KohakuBlueleaf/LyCORIS
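The LoCon idea mentioned above extends that low-rank update to convolution kernels by factoring a k×k conv into a k×k "down" conv with r channels followed by a 1×1 "up" conv. A hedged PyTorch sketch of the decomposition, not LyCORIS's exact implementation.

```python
# Sketch: low-rank update for a Conv2d, LoCon-style.
import torch.nn as nn

class LoConUpdate(nn.Module):
    def __init__(self, conv: nn.Conv2d, r: int = 4, alpha: int = 4):
        super().__init__()
        # k x k conv down to r channels, then 1 x 1 conv back up
        self.down = nn.Conv2d(conv.in_channels, r, conv.kernel_size,
                              conv.stride, conv.padding, bias=False)
        self.up = nn.Conv2d(r, conv.out_channels, 1, bias=False)
        nn.init.zeros_(self.up.weight)   # start as a no-op update
        self.scale = alpha / r

    def forward(self, x):
        # added to the frozen conv's output during training/inference
        return self.scale * self.up(self.down(x))
```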
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
llama.cpp - LLM inference in C/C++
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
gpt4all - gpt4all: run open-source LLMs anywhere
sd-webui-additional-networks
llama - Inference code for Llama models
kohya_ss
ggml - Tensor library for machine learning
StableTuner - Finetuning SD in style.