alpaca-lora VS minimal-llama

Compare alpaca-lora vs minimal-llama and see what their differences are.

                 alpaca-lora           minimal-llama
Mentions         107                   4
Stars            18,137                457
Growth           -                     -
Activity         3.6                   8.5
Last commit      about 2 months ago    6 months ago
Language         Jupyter Notebook      Python
License          Apache License 2.0    -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

alpaca-lora

Posts with mentions or reviews of alpaca-lora. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-11.

minimal-llama

Posts with mentions or reviews of minimal-llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
  • Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
    16 projects | news.ycombinator.com | 21 Mar 2023
  • Visual ChatGPT
    8 projects | news.ycombinator.com | 9 Mar 2023
    I can't edit my comment now, but it's 30B that needs 18GB of VRAM.

LLaMA-13B, GPT-3 175B level, needs only 10GB of VRAM with GPTQ 4-bit quantization.

    >do you think there's anything left to trim? like weight pruning, or LoRA, or I dunno, some kind of Huffman coding scheme that lets you mix 4-bit, 2-bit and 1-bit quantizations?

    Absolutely. The GPTQ paper claims negligible output quality loss with 3-bit quantization. The GPTQ-for-LLaMA repo supports 3-bit quantization and inference. So this extra 25% savings is already possible.

As of right now, GPTQ-for-LLaMA is using a VRAM-hungry attention method. Flash attention will reduce the requirements for 7B to 4GB and possibly fit 30B with a 2048-token context window into 16GB, all before stacking 3-bit.

    Pruning is a possibility but I'm not aware of anyone working on it yet.

LoRA has already been implemented. See https://github.com/zphang/minimal-llama#peft-fine-tuning-wit...
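The comment above credits the VRAM savings to 4-bit (and potentially 3-bit) weight quantization. GPTQ itself chooses roundings using second-order information, but the memory arithmetic is easier to see with a plain round-to-nearest scheme; the sketch below is a toy illustration, not the GPTQ algorithm, and the group size of 128 is an assumption.

```python
import torch

def quantize_rtn(weight: torch.Tensor, bits: int = 4, group_size: int = 128):
    """Toy round-to-nearest weight quantization (not GPTQ itself).

    Stores each group of `group_size` weights as `bits`-bit integers plus
    one fp16 scale -- roughly the 16-bit -> 4-bit shrink behind the VRAM
    numbers quoted above.
    """
    qmax = 2 ** (bits - 1) - 1                        # 7 for signed 4-bit
    w = weight.reshape(-1, group_size)
    scale = w.abs().amax(dim=1, keepdim=True) / qmax  # one scale per group
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax).to(torch.int8)
    return q, scale.half()

def dequantize(q, scale, shape):
    return (q.float() * scale.float()).reshape(shape)

w = torch.randn(4096, 4096)          # one LLaMA-7B-sized projection matrix
q, s = quantize_rtn(w)
print("mean abs error:", (w - dequantize(q, s, w.shape)).abs().mean().item())
```

Real 4-bit kernels additionally pack two values per byte; the int8 tensor here only halves storage and is used for readability.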
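The "VRAM-hungry attention" point is that naive attention materializes the full context-length-squared score matrix, while flash attention computes the same result in tiles without ever storing it. PyTorch 2.x exposes a fused kernel of this kind through `torch.nn.functional.scaled_dot_product_attention`; the sketch below assumes a CUDA GPU, and the shapes are illustrative.

```python
import torch
import torch.nn.functional as F

B, H, L, D = 1, 32, 2048, 128   # batch, heads, context length, head dim
q = torch.randn(B, H, L, D, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Naive attention: materializes a (B, H, L, L) score tensor. At L=2048
# with 32 heads that is 32 * 2048^2 fp16 values, ~256 MB per layer.
scores = (q @ k.transpose(-2, -1)) / D ** 0.5
naive = torch.softmax(scores, dim=-1) @ v

# Fused path: dispatches to a flash-attention-style kernel when one is
# available, producing the same output without the L x L intermediate.
fused = F.scaled_dot_product_attention(q, k, v)

print(torch.allclose(naive, fused, atol=1e-2))
```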
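The minimal-llama link above points to its PEFT-based LoRA fine-tuning recipe. As a minimal sketch of that pattern with the Hugging Face `peft` library (the checkpoint path and every hyperparameter here are illustrative placeholders, not values taken from the repo):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint path -- substitute whatever LLaMA weights you have.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-7b-hf",
    load_in_8bit=True,   # keep the frozen base weights in 8-bit (bitsandbytes)
    device_map="auto",
)

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small LoRA adapters train
```

Only the adapter matrices receive gradients, which is why a 7B model becomes trainable on a single commodity GPU.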

What are some alternatives?

When comparing alpaca-lora and minimal-llama you can also consider the following projects:

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

qlora - QLoRA: Efficient Finetuning of Quantized LLMs

visual-chatgpt - Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Moved to: https://github.com/microsoft/TaskMatrix]

llama.cpp - LLM inference in C/C++

whisper.cpp - Port of OpenAI's Whisper model in C/C++

gpt4all - Run open-source LLMs anywhere

simple-llm-finetuner - Simple UI for LLM Model Finetuning

llama - Inference code for Llama models

ggml - Tensor library for machine learning

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ