CodeCapybara vs LLaMA-LoRA-Tuner
| | CodeCapybara | LLaMA-LoRA-Tuner |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 156 | 425 |
| Growth | 1.3% | - |
| Activity | 5.9 | 7.9 |
| Last commit | about 1 year ago | 12 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
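The exact weighting behind the activity number is not published here; purely as an illustration, a recency-weighted score with an assumed exponential decay could be computed like this (the half-life parameter is hypothetical):

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Recency-weighted commit count: a commit from today adds ~1.0,
    one from a month ago ~0.5, and so on (exponential decay)."""
    now = datetime.now(timezone.utc)
    return sum(
        2.0 ** (-(now - d).total_seconds() / 86400.0 / half_life_days)
        for d in commit_dates
    )

# Toy usage: commits 1, 10, and 100 days old.
now = datetime.now(timezone.utc)
commits = [now - timedelta(days=n) for n in (1, 10, 100)]
print(round(activity_score(commits), 2))  # recent commits dominate the score
```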
CodeCapybara

LLaMA-LoRA-Tuner
- [P] Uptraining a pretrained model using company data?
- (HELP) Token Issue on Generation
- Help with Random Characters and Words on Output
- Fine-tuning LLaMA for research without Meta license
I would like to fine-tune LLaMA using this tuner for a research paper, but I am wondering if it is legal to do so. If it isn't, does anyone have suggestions for alternatives that are as user-friendly as this one, since I am not a good programmer? Any advice would be greatly appreciated, thank you!
- Why run LLMs locally?
The bad news is that, as far as I know, it does require a GPU. The good news is that I've gotten training done with a 7B model on both Google Colab and Kaggle with free accounts. Both have just enough VRAM to make it work as long as you load the model in 8-bit, e.g. with --load-in-8bit on the command line in oobabooga. The LoRA Tuner frontend even has a Colab notebook set up to simplify things further. The frontend does cap the LoRA Rank and LoRA Alpha values pretty low, but that cap is just set in the GUI (I think in one of the files in its UI directory), so it's easy to hand-edit to allow higher values if desired. A sketch of the 8-bit-plus-LoRA setup follows this list.
- How can I train my custom dataset on top of Vicuna?
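Under the hood, tuners like this one typically use Hugging Face PEFT; purely as an illustration of the 8-bit-plus-LoRA setup described above, a minimal sketch might look like the following (the model checkpoint and the r/lora_alpha values are assumptions for the example, not the tuner's actual defaults):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "huggyllama/llama-7b"  # illustrative; any LLaMA checkpoint works

# Load in 8-bit so a 7B model fits in free Colab/Kaggle VRAM (~15 GB).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Cast norms/head to fp32 and enable gradient checkpointing for stable 8-bit training.
model = prepare_model_for_kbit_training(model)

# The GUI caps rank and alpha; calling PEFT directly lets you pick any values.
lora_config = LoraConfig(
    r=64,                # LoRA rank; illustrative, higher than a typical GUI cap
    lora_alpha=128,      # LoRA scaling factor; illustrative
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

Higher r and lora_alpha increase adapter capacity but also VRAM use, which is likely why a GUI aimed at free-tier GPUs caps them conservatively.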
What are some alternatives?
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
CodeCapypara - [Moved to: https://github.com/FSoft-AI4Code/CodeCapybara]
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
BELLE - BELLE: Be Everyone's Large Language model Engine (open-source Chinese conversational LLM)
lora - Train Large Language Models (LLM) using LoRA
ExpertLLaMA - An open-source chatbot built with ExpertPrompting, which achieves 96% of ChatGPT's capability.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
simple-llm-finetuner - Simple UI for LLM Model Finetuning
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.