peft VS minimal-llama

Compare peft vs minimal-llama and see what their differences are.

               peft                 minimal-llama
Mentions       26                   4
Stars          13,877               456
Growth         4.1%                 -
Activity       9.7                  8.5
Last commit    4 days ago           7 months ago
Language       Python               Python
License        Apache License 2.0   -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

peft

Posts with mentions or reviews of peft. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-05.

minimal-llama

Posts with mentions or reviews of minimal-llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
  • Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
    16 projects | news.ycombinator.com | 21 Mar 2023
  • Visual ChatGPT
    8 projects | news.ycombinator.com | 9 Mar 2023
    I can't edit my comment now, but it's 30B that needs 18GB of VRAM.

    LLaMA-13B, GPT-3 175B level, only needs 10GB of VRAM with GPTQ 4-bit quantization.

    >do you think there's anything left to trim? like weight pruning, or LoRA, or I dunno, some kind of Huffman coding scheme that lets you mix 4-bit, 2-bit and 1-bit quantizations?

    Absolutely. The GPTQ paper claims negligible output quality loss with 3-bit quantization. The GPTQ-for-LLaMA repo supports 3-bit quantization and inference. So this extra 25% savings is already possible.

    As of right now, GPTQ-for-LLaMA is using a VRAM-hungry attention method. Flash attention will reduce the requirements for 7B to 4GB and possibly fit 30B with a 2048 context window into 16GB, all before stacking 3-bit.

    Pruning is a possibility but I'm not aware of anyone working on it yet.

    LoRA has already been implemented. See https://github.com/zphang/minimal-llama#peft-fine-tuning-wit...
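
For readers unfamiliar with the technique referenced in that comment: LoRA freezes the base model's weights and trains small low-rank adapter matrices injected into selected layers, which is exactly what the peft library automates. A minimal sketch of attaching LoRA adapters with peft follows; the checkpoint name, target modules, and hyperparameters are illustrative placeholders, and exact argument names can vary between peft versions.

```python
# Minimal sketch: wrap a causal LM with LoRA adapters via peft.
# The checkpoint name and hyperparameters below are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # rank of the low-rank update matrices
    lora_alpha=16,            # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically a fraction of a percent of all weights
```

Because gradients and optimizer state exist only for the adapter weights, the training overhead on top of the frozen forward pass is small, which is what makes fine-tuning 7B-class models feasible on commodity GPUs.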

What are some alternatives?

When comparing peft and minimal-llama you can also consider the following projects:

lora - Using Low-rank adaptation to quickly fine-tune diffusion models.

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

visual-chatgpt - Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Moved to: https://github.com/microsoft/TaskMatrix]

alpaca-lora - Instruct-tune LLaMA on consumer hardware

whisper.cpp - Port of OpenAI's Whisper model in C/C++

dalai - The simplest way to run LLaMA on your local machine

simple-llm-finetuner - Simple UI for LLM Model Finetuning

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.

GPTQ-for-LLaMa - 4-bit quantization of LLaMA using GPTQ
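
On the quantization side (GPTQ-for-LLaMa above, and the VRAM figures quoted in the comment), the headline numbers follow from parameter count times bit width. A back-of-the-envelope sketch, ignoring activations, the KV cache, and per-group quantization metadata, all of which add a few GB in practice:

```python
# Rough weight-memory estimates for LLaMA checkpoints at different bit widths.
# Parameter counts are approximate; runtime VRAM is higher because of
# activations, the KV cache, and quantization bookkeeping.
def weight_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9  # bits -> bytes -> GB (decimal)

models = {"LLaMA-7B": 6.7e9, "LLaMA-13B": 13.0e9, "LLaMA-30B": 32.5e9}
for name, n_params in models.items():
    line = ", ".join(f"{bits}-bit: {weight_gb(n_params, bits):.1f} GB"
                     for bits in (16, 4, 3))
    print(f"{name}: {line}")

# Going from 4-bit to 3-bit weights removes a quarter of the weight memory,
# the "extra 25% savings" mentioned in the comment above.
```

For LLaMA-13B this gives roughly 6.5 GB of 4-bit weights, consistent with the ~10 GB VRAM figure once runtime overhead is included.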