| | FastLoRAChat | lora-instruct |
|---|---|---|
| Posts | 2 | 1 |
| Stars | 119 | 97 |
| Growth | - | - |
| Activity | 7.2 | 7.0 |
| Latest commit | about 1 year ago | 5 months ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FastLoRAChat
[P] FastLoRAChat: Instruct-tune LLaMA on consumer hardware with ShareGPT data
Announcing FastLoRAChat, training a ChatGPT-style model without an A100.
- FastLoRAChat – LoRA fine-tuned LLM with ChatGPT capability
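Neither project's exact training script is reproduced here, but both follow the alpaca-lora pattern of attaching low-rank adapters to a frozen base model with Hugging Face PEFT. A minimal sketch of that recipe follows; the model name, data file, prompt template, and hyperparameters are illustrative assumptions, not values taken from FastLoRAChat.

```python
# Hedged sketch: LoRA instruct-tuning of a causal LM with Hugging Face PEFT.
# Model name, dataset path, and hyperparameters are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base_model = "huggyllama/llama-7b"          # assumption: any LLaMA-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto")

# LoRA trains only small low-rank adapter matrices, which is what makes
# consumer-GPU fine-tuning feasible.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of the weights

# Assumption: instruction data stored as JSON with "instruction"/"output" fields.
data = load_dataset("json", data_files="sharegpt_formatted.json")["train"]

def tokenize(example):
    prompt = (f"### Instruction:\n{example['instruction']}\n"
              f"### Response:\n{example['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           gradient_accumulation_steps=8, num_train_epochs=3,
                           learning_rate=3e-4, fp16=True, logging_steps=10),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")           # saves only the adapter weights
```

Because only the adapter matrices receive gradients, this fits on a single consumer GPU, which is what the "without an A100" claim above refers to.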
lora-instruct
Training a LoRA with MPT Models
Hi, I have created custom data in the same format as the alpaca JSON file and fine-tuned mpt-7b-instruct using this link: https://github.com/leehanchung/lora-instruct. I also used your patch; the fine-tuning completed successfully and the loss decreased, but when I try to make predictions with the fine-tuned model I do not get correct output even on the training data, and it generates a lot of nonsense.
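A frequent cause of the behaviour described in that post lies on the inference side rather than in training: the trained adapter has to be attached to the base model explicitly, and the prompt must use the same template as during fine-tuning. Below is a rough PEFT loading sketch; the base model name, adapter path, and prompt template are assumptions, not the poster's actual setup.

```python
# Hedged sketch: loading a LoRA adapter for inference with PEFT.
# Base model name, adapter path, and prompt template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "mosaicml/mpt-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto",
    trust_remote_code=True)                 # MPT checkpoints ship custom modeling code

# Attach the trained adapter; skipping this step (or pointing at the wrong
# directory) makes the model behave like the untuned base model.
model = PeftModel.from_pretrained(model, "lora-out")
model.eval()

# Use the same prompt template that was used during fine-tuning; a mismatch is
# a common cause of incoherent generations from an otherwise well-trained adapter.
prompt = "### Instruction:\nSummarize LoRA in one sentence.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128,
                         do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```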
What are some alternatives?
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
AutoLearn-GPT - ChatGPT learns automatically.
hyde - HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
LMOps - General technology for enabling AI capabilities w/ LLMs and MLLMs
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
mpt-lora-patch - Patch for MPT-7B which allows using and training a LoRA
llama2-haystack - Using Llama2 with Haystack, the NLP/LLM framework.
Roy - Roy: A lightweight, model-agnostic framework for crafting advanced multi-agent systems using large language models.
gpt-j-fine-tuning-example - Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
punica - Serving multiple LoRA fine-tuned LLMs as one
alpaca-lora - Instruct-tune LLaMA on consumer hardware
Anima - 33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference on a single 4GB GPU