| | OpenGPT | FastLoRAChat |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 322 | 119 |
| Stars growth | 2.5% | - |
| Activity | 5.8 | 7.2 |
| Latest commit | about 1 year ago | about 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OpenGPT
GitHub - CogStack/OpenGPT: A framework for creating grounded, instruction-based datasets and training conversational domain-expert Large Language Models (LLMs).
- Training an LLM as a hobbyist: https://github.com/CogStack/OpenGPT could be helpful, together with the accompanying Jupyter notebook that shows how to fine-tune an existing LLM into a mini ChatGPT. You will need to make some changes to use your own dataset, but it should be fairly easy.
- [P] A Large Language Model for Healthcare | NHS-LLM and OpenGPT (GitHub: https://github.com/CogStack/opengpt, Blog: https://aiforhealthcare.substack.com/p/a-large-language-model-for-healthcare)
FastLoRAChat
- [P] FastLoRAChat: Instruct-tune LLaMA on consumer hardware with ShareGPT data. Announcing FastLoRAChat, training a ChatGPT-style model without an A100.
- FastLoRAChat – a LoRA-finetuned LLM with ChatGPT capability
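What makes LoRA fine-tuning feasible without an A100 is that the pretrained weight matrices stay frozen and only a small low-rank update (W + (alpha/r)·B·A) is trained. The sketch below is a minimal, illustrative NumPy version of that forward pass, not code from FastLoRAChat or alpaca-lora; the names and shapes are assumptions for the example.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass of a LoRA-adapted linear layer.

    W: frozen pretrained weight, shape (d_out, d_in) -- never updated.
    A: trainable down-projection, shape (r, d_in).
    B: trainable up-projection, shape (d_out, r).
    Only A and B (r * (d_in + d_out) values) are trained,
    a tiny fraction of the d_out * d_in values in W.
    """
    scaling = alpha / r
    return x @ W.T + (x @ A.T) @ B.T * scaling

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # LoRA A, small random init
B = np.zeros((d_out, r))               # LoRA B, zero init: adapter starts as a no-op
x = rng.normal(size=(4, d_in))

y = lora_forward(x, W, A, B, r=r)
# With B zero-initialized the adapter contributes nothing,
# so the layer initially behaves exactly like the frozen base layer.
assert np.allclose(y, x @ W.T)
```

In practice libraries such as Hugging Face PEFT wire this same low-rank update into the attention projections of a transformer, which is why a 7B-parameter model can be instruct-tuned on a single consumer GPU.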
What are some alternatives?
paper-qa - LLM Chain for answering questions from documents with citations
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
lora-instruct - Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
hyde - HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
llama2-haystack - Using Llama2 with Haystack, the NLP/LLM framework.
gpt-j-fine-tuning-example - Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
alpaca-lora - Instruct-tune LLaMA on consumer hardware
Anima - 33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
punica - Serving multiple LoRA finetuned LLM as one
chatgpt-comparison-detection - Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥