OpenChatKit
| | minimal-llama | OpenChatKit |
|---|---|---|
| Mentions | 4 | 23 |
| Stars | 456 | 8,998 |
| Growth | - | 0.0% |
| Activity | 8.5 | 7.1 |
| Latest commit | 7 months ago | 25 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
minimal-llama
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
-
Visual ChatGPT
I can't edit my comment now, but it's 30B that needs 18GB of VRAM.
LLaMA-13B, GPT-3 175B level, needs only 10GB of VRAM with GPTQ 4-bit quantization.
>do you think there's anything left to trim? like weight pruning, or LoRA, or I dunno, some kind of Huffman coding scheme that lets you mix 4-bit, 2-bit and 1-bit quantizations?
Absolutely. The GPTQ paper claims negligible output-quality loss with 3-bit quantization, and the GPTQ-for-LLaMA repo already supports 3-bit quantization and inference, so this extra 25% savings is already possible.
As of right now, GPTQ-for-LLaMA uses a VRAM-hungry attention method. Flash attention will reduce the requirement for 7B to 4GB and could fit 30B with a 2048-token context window into 16GB, all before stacking 3-bit quantization.
Pruning is a possibility but I'm not aware of anyone working on it yet.
LoRA has already been implemented. See https://github.com/zphang/minimal-llama#peft-fine-tuning-wit...
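The VRAM figures quoted in this thread follow from simple arithmetic on bits per weight. A back-of-envelope sketch (weights only; `weight_memory_gb` is a hypothetical helper, not part of any repo, and it ignores activations, KV cache, and quantization overhead) shows why 4-bit fits 13B near the 10GB mark and why dropping from 4-bit to 3-bit yields the extra 25% savings:

```python
def weight_memory_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory for model weights alone.

    Ignores activations, the KV cache, and per-group quantization
    overhead (scales/zero-points), so real usage is somewhat higher.
    """
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# LLaMA-13B at different precisions
fp16 = weight_memory_gb(13, 16)  # 26.0 GB -> too big for a single consumer GPU
q4 = weight_memory_gb(13, 4)     # 6.5 GB  -> fits a 10GB card with headroom
q3 = weight_memory_gb(13, 3)     # 4.875 GB -> 25% smaller than 4-bit
```

The 25% comes straight from the ratio of bits per weight (3/4 = 0.75); the remaining VRAM above the weight footprint goes to the attention buffers and KV cache that flash attention shrinks.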
OpenChatKit
- OpenChatKit - OSS Framework for building chatbots
-
How should I get an in-depth mathematical understanding of generative AI?
ChatGPT isn't open-sourced, so we don't know what the actual implementation is. I think you can read Open Assistant's source code for application design. If that is too much, try Open Chat Toolkit's source code for developer tools. If you need a very bare implementation, you should go for lucidrains/PaLM-rlhf-pytorch.
- OpenChatKit
- OpenChatKit: Open-source kit for setting up a local, libre, LLM chatbot
-
I created a locally-run ai assistant for UE5’s documentation
For a locally run open-source option, I'd recommend taking a look at OpenChatKit. It's built on top of a couple of different open-source LLMs that have been fine-tuned for use as chatbots. I've only messed around with the online demo a little bit, but from what I've read it is supposed to run on a laptop and be almost as good as GPT-3.5.
-
[D] Are there any MIT licenced (or similar) open-sourced instruction-tuned LLMs available?
OpenChatKit https://github.com/togethercomputer/OpenChatKit
-
[D] Is there currently anything comparable to the OpenAI API?
Togethercomputer released OpenChatKit a few weeks ago. Haven't tested it, but it looks promising: https://github.com/togethercomputer/OpenChatKit
What are some alternatives?
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
visual-chatgpt - Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Moved to: https://github.com/microsoft/TaskMatrix]
roomGPT - Upload a photo of your room to generate your dream room with AI.
whisper.cpp - Port of OpenAI's Whisper model in C/C++
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
simple-llm-finetuner - Simple UI for LLM Model Finetuning
wik - wik is used to get information about anything in the shell using Wikipedia.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
minChatGPT - A minimum example of aligning language models with RLHF similar to ChatGPT