| | multi-lora-fine-tune | LLaMA-LoRA-Tuner |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 182 | 425 |
| Growth | 15.9% | - |
| Activity | 9.3 | 7.9 |
| Last commit | 9 days ago | 12 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
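The exact weighting scheme behind the activity number isn't published here; as a rough illustration only, a recency-weighted score with exponential decay (an assumed scheme, not the site's actual formula) could be computed like this:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0, now=None):
    """Recency-weighted activity: each commit contributes
    0.5 ** (age_in_days / half_life_days), so newer commits count
    more than older ones. The 30-day half-life is an assumption."""
    now = now or datetime.now(timezone.utc)
    return sum(
        0.5 ** ((now - d).total_seconds() / 86400.0 / half_life_days)
        for d in commit_dates
    )

# A commit from today is worth ~1.0; one from 30 days ago ~0.5.
today = datetime.now(timezone.utc)
print(activity_score([today], now=today))  # -> 1.0
```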
multi-lora-fine-tune
- Has anyone tried out the ASPEN framework for LoRA fine-tuning yet and can share their experience?
I want to train a Code LLaMA on some data, and I am looking for a framework or technique to train it on my PC with a 3090 Ti. In my research, I stumbled across the paper "ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU" (https://arxiv.org/abs/2312.02515) and its GitHub project: https://github.com/TUDB-Labs/multi-lora-fine-tune.
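ASPEN ships its own training entry point and config format, so the sketch below is not its API; it shows only the standard single-GPU LoRA baseline that such frameworks build on, using Hugging Face transformers/peft. The model name and hyperparameters are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "codellama/CodeLlama-7b-hf"  # assumed checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # fp16 so a 7B model fits a 24 GB 3090 Ti
    device_map="auto",
)

lora_cfg = LoraConfig(
    r=16,                                  # rank: illustrative value
    lora_alpha=32,                         # scaling: illustrative value
    target_modules=["q_proj", "v_proj"],   # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights train
```

ASPEN's contribution on top of this baseline is scheduling several such adapters through one shared base model on a single GPU, rather than running one job per adapter.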
LLaMA-LoRA-Tuner
- [P] Uptraining a pretrained model using company data?
- (HELP) Token Issue on Generation
- Help with Random Characters and Words on Output
- Fine-tuning LLaMA for research without Meta license
I would like to fine-tune LLaMA using this tuner for a research paper, but I am wondering whether it is legal to do so. If it isn't, does anyone have suggestions for alternatives that are as user-friendly as this one, since I am not a good programmer? Any advice would be greatly appreciated, thank you!
- Why run LLMs locally?
The bad news is that, as far as I know, it does require a GPU. The good news is that I've gotten training done with a 7B model on both Google Colab and Kaggle with free accounts. Both have just enough VRAM to make it work as long as you load the model in 8-bit, e.g. with --load-in-8bit on the command line in oobabooga. The LoRA Tuner frontend even has a Colab notebook set up to simplify things further. The frontend does cap the LoRA rank and LoRA alpha values pretty low, but that cap is just set in the GUI code (one of the files in its UI directory, I think), so it's easy to hand-edit if you want higher values. A minimal sketch of the same setup in code follows this list.
- How can I train my custom dataset on top of Vicuna?
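For reference, the setup described in the comment above, 8-bit loading plus a LoRA adapter with rank/alpha above the GUI caps, roughly corresponds to the following transformers/peft sketch. The checkpoint name and the rank/alpha values are assumptions, not the Tuner's actual code:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "huggyllama/llama-7b"  # placeholder 7B LLaMA checkpoint

# Load the base model in 8-bit so it fits free-tier Colab/Kaggle VRAM.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables input grads

# Rank/alpha set above the low GUI caps mentioned in the comment.
lora_cfg = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
```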
What are some alternatives?
unsloth - Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
Finetune_LLMs - Repo for fine-tuning Causal LLMs
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
Anima - 33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU
BELLE - BELLE: Be Everyone's Large Language model Engine (an open-source Chinese dialogue LLM)
lora - Train Large Language Models (LLM) using LoRA
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
simple-llm-finetuner - Simple UI for LLM Model Finetuning
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Zicklein - Finetuning instruct-LLaMA on German datasets.