dalai VS minimal-llama

Compare dalai vs minimal-llama and see what their differences are.

                dalai          minimal-llama
Mentions        59             4
Stars           13,060         456
Growth          -              -
Activity        6.5            8.5
Latest commit   5 months ago   7 months ago
Language        CSS            Python
License         -              -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

dalai

Posts with mentions or reviews of dalai. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-14.

minimal-llama

Posts with mentions or reviews of minimal-llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
  • Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
    16 projects | news.ycombinator.com | 21 Mar 2023
  • Visual ChatGPT
    8 projects | news.ycombinator.com | 9 Mar 2023
    I can't edit my comment now, but it's 30B that needs 18GB of VRAM.

    LLaMA-13B, GPT-3 175B level, only needs 10GB of VRAM with the GPTQ 4bit quantization.

    >do you think there's anything left to trim? like weight pruning, or LoRA, or I dunno, some kind of Huffman coding scheme that lets you mix 4-bit, 2-bit and 1-bit quantizations?

    Absolutely. The GPTQ paper claims negligible output quality loss with 3-bit quantization. The GPTQ-for-LLaMA repo supports 3-bit quantization and inference. So this extra 25% savings is already possible.
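
These figures are easy to sanity-check with back-of-envelope arithmetic. The sketch below (an editorial illustration, not part of the quoted comment) estimates weight-only memory; real VRAM use runs a few GB higher once activations, the KV cache, and quantization metadata such as scales and zero points are added:

    # Rough weight-memory estimate for quantized LLaMA checkpoints.
    # Actual VRAM is higher: activations, the KV cache, and per-group
    # quantization scales/zero-points all add overhead.

    def weight_gib(n_params: float, bits: int) -> float:
        """GiB needed to store n_params weights at the given bit width."""
        return n_params * bits / 8 / 2**30

    for name, n in [("7B", 7e9), ("13B", 13e9), ("30B", 30e9)]:
        print(f"LLaMA-{name}: 4-bit ~{weight_gib(n, 4):.1f} GiB, "
              f"3-bit ~{weight_gib(n, 3):.1f} GiB (25% smaller)")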

    As of right now, GPTQ-for-LLaMA is using a VRAM-hungry attention method. Flash attention will reduce the requirements for 7B to 4GB and possibly fit 30B with a 2048-token context window into 16GB, all before stacking 3-bit.
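
As an illustration of the kind of memory-efficient attention being referred to, here is a minimal PyTorch 2.x sketch (an editorial example assuming a CUDA GPU with fp16 support, not code from GPTQ-for-LLaMA). F.scaled_dot_product_attention can dispatch to a FlashAttention kernel that never materializes the full seq_len x seq_len score matrix:

    # Memory-efficient attention in PyTorch 2.x: force the FlashAttention
    # backend so the full attention-score matrix is never materialized.
    import torch
    import torch.nn.functional as F

    # (batch, heads, seq_len, head_dim) -- shapes chosen to mimic a
    # 2048-token context window; purely illustrative.
    q = torch.randn(1, 32, 2048, 128, device="cuda", dtype=torch.float16)
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    with torch.backends.cuda.sdp_kernel(
            enable_flash=True, enable_math=False, enable_mem_efficient=False):
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)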

    Pruning is a possibility but I'm not aware of anyone working on it yet.

    LoRA has already been implemented. See https://github.com/zphang/minimal-llama#peft-fine-tuning-wit...
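
For readers following that link, here is a minimal sketch of what PEFT-style LoRA fine-tuning looks like (an editorial example; the checkpoint name and hyperparameters are illustrative assumptions, not the repo's exact settings):

    # LoRA fine-tuning setup with Hugging Face PEFT: freeze the base model
    # and train small low-rank adapters on the attention projections.
    import torch
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    model = AutoModelForCausalLM.from_pretrained(
        "decapoda-research/llama-7b-hf",  # assumed checkpoint name
        torch_dtype=torch.float16,
    )

    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                    # rank of the low-rank update matrices
        lora_alpha=16,          # scaling factor for the update
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    )

    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% trainable

Only the adapter weights receive gradients, which is why this style of fine-tuning fits on commodity GPUs.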

What are some alternatives?

When comparing dalai and minimal-llama you can also consider the following projects:

gpt4all - gpt4all: run open-source LLMs anywhere

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

visual-chatgpt - Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Moved to: https://github.com/microsoft/TaskMatrix]

llama - Inference code for Llama models

whisper.cpp - Port of OpenAI's Whisper model in C/C++

alpaca-lora - Instruct-tune LLaMA on consumer hardware

simple-llm-finetuner - Simple UI for LLM Model Finetuning

llama.cpp - LLM inference in C/C++

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ