lora-instruct vs punica

| | lora-instruct | punica |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 97 | 831 |
| Growth | - | 5.5% |
| Activity | 7.0 | 8.7 |
| Latest commit | 5 months ago | 14 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
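The exact activity formula is not published here; a minimal sketch of one plausible recency-weighted scheme (the half-life decay and the `activity_score` helper are assumptions for illustration, not the site's actual computation) looks like this:

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=30):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits
    carry more weight than older ones, as described above."""
    score = 0.0
    for d in commit_dates:
        age = (today - d).days
        score += 0.5 ** (age / half_life_days)
    return score

# Three commits this week score far higher than three commits
# from roughly a year ago, even though the raw counts are equal.
today = date(2024, 1, 31)
recent = [today - timedelta(days=n) for n in (1, 2, 3)]
old = [today - timedelta(days=n) for n in (300, 310, 320)]
print(activity_score(recent, today) > activity_score(old, today))  # True
```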
lora-instruct
Training a LoRA with MPT Models
Hi, I have created custom data in the same format as the Alpaca JSON file and fine-tuned mpt-7b-instruct using https://github.com/leehanchung/lora-instruct. I have also used your patch; the fine-tuning completed successfully and the loss decreased, but when I try to make predictions with the fine-tuned model, I don't get correct output even on the training data. It generates lots of nonsense.
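One thing worth checking in a situation like this is whether the LoRA adapter is actually being applied at inference: if only the base weights are loaded, the model ignores the fine-tune entirely. The sketch below (plain Python, illustrative matrix sizes; `merge_lora` is a hypothetical helper, not part of the repo above) shows the arithmetic a LoRA merge performs, W_eff = W + (alpha / r) * B @ A:

```python
def matmul(a, b):
    """Naive matrix multiply for small illustrative matrices."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the effective weight once
    the low-rank adapter delta is folded into the base weight."""
    scale = alpha / r
    delta = matmul(B, A)  # (out_features x r) @ (r x in_features)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight with a rank-1 adapter (r = 1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]          # r x in_features
B = [[0.5], [0.25]]       # out_features x r
merged = merge_lora(W, A, B, alpha=2, r=1)
print(merged)  # [[2.0, 2.0], [0.5, 2.0]]
```

If generation looks like the untuned base model, the adapter weights were likely never merged or attached; if it is pure gibberish, a tokenizer or prompt-format mismatch between training and inference is another common culprit.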
punica
What are some alternatives?
FastLoRAChat - Instruct-tune LLaMA on consumer hardware with shareGPT data
llama-peft-tuner - Tune LLaMA-7B on the Alpaca dataset using PEFT / LoRA, based on @zphang's https://github.com/zphang/minimal-llama scripts.
AutoLearn-GPT - ChatGPT learns automatically.
LongLoRA - Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
LMOps - General technology for enabling AI capabilities w/ LLMs and MLLMs
mpt-lora-patch - Patch for MPT-7B which allows using and training a LoRA
Zicklein - Finetuning instruct-LLaMA on German datasets.
Roy - Roy: A lightweight, model-agnostic framework for crafting advanced multi-agent systems using large language models.
LLM-Finetuning-Hub - Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. [Moved to: https://github.com/georgian-io/LLM-Finetuning-Toolkit]