LLaMA-LoRA-Tuner vs koboldcpp

| | LLaMA-LoRA-Tuner | koboldcpp |
|---|---|---|
| Mentions | 6 | 180 |
| Stars | 425 | 3,951 |
| Growth | - | - |
| Activity | 7.9 | 10.0 |
| Latest Commit | 12 months ago | 1 day ago |
| Language | Python | C++ |
| License | - | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLaMA-LoRA-Tuner
- [P] Uptraining a pretrained model using company data?
- (HELP) Token Issue on Generation
- Help with Random Characters and Words on Output
- Fine-tuning LLaMA for research without Meta license
  I would like to fine-tune LLaMA using this tuner for a research paper, but I am wondering if it is legal to do so. If it isn't, does anyone have suggestions for alternatives that are as user-friendly as this one, since I am not a good programmer? Any advice would be greatly appreciated, thank you!
- Why run LLMs locally?
  The bad news is that, as far as I know, it does require a GPU. The good news is that I've gotten training done with a 7B model on both Google Colab and Kaggle with free accounts. Both have "just" enough VRAM to make it work, as long as you load the model in 8-bit (e.g. --load-in-8bit on the command line with oobabooga). The LoRA Tuner frontend even has a Colab notebook set up to simplify things further. The frontend does cap the LoRA Rank and LoRA Alpha values pretty low, but thankfully that cap is just set in the GUI code, I think in one of the files in its UI directory, so it's easy to hand-edit to allow higher values if desired (a sketch of this kind of setup follows this list).
- How can I train my custom dataset on top of Vicuna?
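To make the 8-bit-plus-LoRA recipe above concrete, here is a minimal sketch using the Hugging Face transformers, peft, and bitsandbytes libraries. The model name, target modules, and the rank/alpha values are illustrative assumptions, not the tuner's own defaults.

```python
# Minimal sketch: load a 7B model in 8-bit and attach a LoRA adapter.
# Assumptions: transformers, peft, bitsandbytes, and accelerate installed,
# a CUDA GPU, and an open LLaMA-style checkpoint (name below is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "openlm-research/open_llama_7b"  # hypothetical choice

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # fits free Colab/Kaggle VRAM
    device_map="auto",
)

# LoRA rank/alpha: the tuner's GUI caps these; higher values cost more VRAM.
lora_config = LoraConfig(
    r=16,                                  # rank: illustrative value
    lora_alpha=32,                         # alpha: illustrative value
    target_modules=["q_proj", "v_proj"],   # common choice for LLaMA-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train
```

Higher r and lora_alpha increase adapter capacity and VRAM use, which is why a free Colab or Kaggle GPU favors modest values like these.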
koboldcpp
- Any Online Communities on Local/Home AI?
- Koboldcpp-1.62.1 adds support for Command-R+
- Show HN: I made an app to use local AI as daily driver
- Easiest way to show my model to my mom?
  FYI, this is the easiest way to host on the horde: https://github.com/LostRuins/koboldcpp
- IT Veteran... why am I struggling with all of this?
- What do you use to run your models?
- ByteDance AI researcher suggests an open-source model more powerful than Gemini will be released soon
- i need some help guys
- [Guide] How to install KoboldAI on Android via Termux (Update 04-12-2023)
  For more information on Koboldcpp, see this guide: https://github.com/LostRuins/koboldcpp/wiki
- SillyTavern 1.10.10 has been released
  Out of curiosity, is there a specific reason for this? The most popular fork, KoboldCpp, is in active development; it was the first to adopt the Min P sampler and even distinguishes itself with the context shift feature. Just wondering what this means for the future. Thanks!
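Since the Min P sampler comes up above, here is a minimal sketch of calling a locally running KoboldCpp server through its KoboldAI-compatible HTTP API. The default port, endpoint path, and the min_p field name reflect my understanding of that API; treat them as assumptions to verify against the project's wiki.

```python
# Minimal sketch: query a locally running KoboldCpp server.
# Assumption: koboldcpp was started with a model and is listening on
# its default port 5001, exposing the KoboldAI-compatible API.
import json
import urllib.request

payload = {
    "prompt": "Write a haiku about local LLMs.",
    "max_length": 80,
    "temperature": 0.8,
    "min_p": 0.05,  # Min P sampling; assumed field name, verify in the wiki
}

req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",  # assumed default endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The KoboldAI-style API returns generated text under results[0]["text"].
print(result["results"][0]["text"])
```

The same running instance also serves a browser UI on that port, so scripted calls and interactive use can share one server.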
What are some alternatives?
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
CodeCapybara - [Moved to: https://github.com/FSoft-AI4Code/CodeCapybara]
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
BELLE - BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational large language model)
KoboldAI - KoboldAI is generative AI software optimized for fictional use, but capable of much more!
lora - Train Large Language Models (LLM) using LoRA
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
simple-llm-finetuner - Simple UI for LLM Model Finetuning
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]