Minimal-llama Alternatives
Similar projects and alternatives to minimal-llama
- text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
- langchain: Discontinued. ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain] (by hwchase17)
- petals: 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading.
- visual-chatgpt: Discontinued. Official repo for the paper "Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models" [Moved to: https://github.com/microsoft/TaskMatrix]
minimal-llama reviews and mentions
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
- Visual ChatGPT
I can't edit my comment now, but it's the 30B model that needs 18GB of VRAM. LLaMA-13B, which benchmarks at roughly the level of GPT-3 175B, needs only 10GB of VRAM with GPTQ 4-bit quantization.
> do you think there's anything left to trim? like weight pruning, or LoRA, or I dunno, some kind of Huffman coding scheme that lets you mix 4-bit, 2-bit and 1-bit quantizations?
Absolutely. The GPTQ paper claims negligible output-quality loss with 3-bit quantization, and the GPTQ-for-LLaMa repo already supports 3-bit quantization and inference, so that extra 25% savings is possible today.
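To see where those numbers come from, here is a back-of-the-envelope sketch (illustrative, not from the comment): weight memory is just parameter count times bits per weight. It ignores the KV cache, activations, and quantization metadata that push real usage higher.

```python
# Rough weight-only VRAM estimate for quantized LLaMA models.
# Ignores KV cache, activations, and quantization metadata
# (scales/zero-points), so real usage is somewhat higher.

GiB = 1024 ** 3

def weight_gib(n_params: float, bits: int) -> float:
    """Memory consumed by the weights alone at a given bit width."""
    return n_params * bits / 8 / GiB

# Parameter counts are approximate (LLaMA "30B" is ~32.5B).
for name, n in [("7B", 7e9), ("13B", 13e9), ("30B", 32.5e9)]:
    w4, w3 = weight_gib(n, 4), weight_gib(n, 3)
    print(f"LLaMA-{name}: 4-bit ~ {w4:.1f} GiB, "
          f"3-bit ~ {w3:.1f} GiB ({1 - w3 / w4:.0%} smaller)")
```

Dropping from 4 bits to 3 bits per weight is where the 25% figure comes from.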
As of right now, GPTQ-for-LLaMa uses a VRAM-hungry attention implementation. Flash attention should cut the 7B requirement to about 4GB and possibly fit 30B with a 2048-token context window into 16GB, all before stacking 3-bit quantization.
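For intuition on why the attention implementation matters, here is an illustrative sketch with assumed model dimensions: a naive implementation materializes an n×n score matrix per head, while flash attention computes the same softmax in tiles and never stores the full matrix.

```python
# Score-matrix footprint of naive attention at batch size 1.
# The head count for LLaMA-30B (~52) is an assumption for illustration;
# inference runs one layer at a time, so per-layer cost is what matters.

MiB = 1024 ** 2

def naive_scores_mib(n_heads: int, seq_len: int, bytes_per_el: int = 2) -> float:
    """fp16 (n_heads, seq, seq) attention score matrix for one layer."""
    return n_heads * seq_len * seq_len * bytes_per_el / MiB

for seq in (512, 1024, 2048):
    print(f"seq={seq}: ~{naive_scores_mib(52, seq):.0f} MiB of scores per layer")

# Flash attention streams K/V in blocks, keeping only O(seq) extra
# memory per layer, which is where the VRAM savings come from.
```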
Pruning is a possibility but I'm not aware of anyone working on it yet.
LoRA has already been implemented. See https://github.com/zphang/minimal-llama#peft-fine-tuning-wit...
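For reference, PEFT-style LoRA fine-tuning looks roughly like the sketch below; the checkpoint name, rank, and target modules are illustrative example values, not taken from minimal-llama.

```python
# Illustrative LoRA setup with Hugging Face PEFT; the checkpoint,
# rank, and target modules here are example values, not the repo's.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",   # any LLaMA checkpoint you have access to
    load_in_8bit=True,       # quantize the frozen base weights (needs bitsandbytes)
    device_map="auto",
)

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling applied to the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter weights train
```

Because only the low-rank adapters receive gradients, the frozen base model can stay quantized, which is what makes fine-tuning on commodity GPUs practical.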
Stats
The primary programming language of minimal-llama is Python.
Popular Comparisons
- minimal-llama VS FlexGen
- minimal-llama VS visual-chatgpt
- minimal-llama VS whisper.cpp
- minimal-llama VS simple-llm-finetuner
- minimal-llama VS alpaca-lora
- minimal-llama VS GPTQ-for-LLaMa
- minimal-llama VS text-generation-webui
- minimal-llama VS WebChatRWKVstic
- minimal-llama VS OpenChatKit
- minimal-llama VS dalai