punica vs FastLoRAChat

| | punica | FastLoRAChat |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 817 | 119 |
| Growth | 3.9% | - |
| Activity | 8.7 | 7.2 |
| Latest commit | 7 days ago | about 1 year ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
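The exact formula behind the activity number is not published here; as a purely illustrative sketch of the stated idea that recent commits weigh more than older ones, one could compute an exponentially decayed commit score like the following (the half-life parameter and decay shape are assumptions, not the site's actual weighting):

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted commit score.

    Each commit contributes a weight that halves every
    `half_life_days`, so recent commits count more than old ones.
    `commit_dates` must be timezone-aware datetimes. This is only
    an illustration; the real metric on this page is not published.
    """
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score
```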
Posts mentioning FastLoRAChat:
- [P] FastLoRAChat: Instruct-tune LLaMA on consumer hardware with shareGPT data
- Announcing FastLoRAChat, training ChatGPT without an A100
- FastLoRAChat – LoRA fine-tuned LLM with ChatGPT capability
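The posts above concern LoRA instruct-tuning of LLaMA on consumer hardware. As a rough sketch of what that typically looks like with the Hugging Face PEFT library (the base checkpoint name and hyperparameters below are illustrative assumptions, not values taken from FastLoRAChat):

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT.
# The model name and hyperparameters are illustrative assumptions,
# not FastLoRAChat's actual configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA injects small low-rank adapter matrices into the attention
# projections; the base weights stay frozen, which is what makes
# training feasible on consumer GPUs.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all params
```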
What are some alternatives?
llama-peft-tuner - Tune LLaMa-7B on the Alpaca dataset using PEFT / LoRA. Based on @zphang's https://github.com/zphang/minimal-llama scripts.
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
LongLoRA - Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
lora-instruct - Fine-tune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
hyde - HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
Zicklein - Fine-tuning instruct-LLaMA on German datasets.
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
LLM-Finetuning-Hub - Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. [Moved to: https://github.com/georgian-io/LLM-Finetuning-Toolkit]
llama2-haystack - Using Llama2 with Haystack, the NLP/LLM framework.
gpt-j-fine-tuning-example - Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
alpaca-lora - Instruct-tune LLaMA on consumer hardware