| | FastLoRAChat | ReAct |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 119 | 1,619 |
| Growth | - | - |
| Activity | 7.2 | 4.8 |
| Last commit | about 1 year ago | 3 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FastLoRAChat
- [P] FastLoRAChat: Instruct-tune LLaMA on consumer hardware with ShareGPT data
- Announcing FastLoRAChat, training ChatGPT without an A100
- FastLoRAChat – LoRA fine-tuned LLM with ChatGPT capability
ReAct
What are some alternatives?
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
lora-instruct - Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
EasyEdit - An Easy-to-use Knowledge Editing Framework for LLMs.
hyde - HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
LLM-Training-Puzzles - What would you do with 1000 H100s...
llama2-haystack - Using Llama2 with Haystack, the NLP/LLM framework.
AutoCog - Automaton & Cognition
gpt-j-fine-tuning-example - Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
llm-search - Querying local documents, powered by LLM
alpaca-lora - Instruct-tune LLaMA on consumer hardware
mistral-src - Reference implementation of Mistral AI 7B v0.1 model.