FlexGen VS minimal-llama

Compare FlexGen vs minimal-llama and see what their differences are.

                 FlexGen              minimal-llama
Mentions         39                   4
Stars            9,007                456
Stars growth     0.8%                 -
Activity         3.0                  8.5
Latest commit    15 days ago          7 months ago
Language         Python               Python
License          Apache License 2.0   -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

FlexGen

Posts with mentions or reviews of FlexGen. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-03.

minimal-llama

Posts with mentions or reviews of minimal-llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-21.
  • Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
    16 projects | news.ycombinator.com | 21 Mar 2023
  • Visual ChatGPT
    8 projects | news.ycombinator.com | 9 Mar 2023
    I can't edit my comment now, but it's 30B that needs 18GB of VRAM.

    LLaMA-13B, GPT-3 175B level, needs only 10GB of VRAM with GPTQ 4-bit quantization.

    >do you think there's anything left to trim? like weight pruning, or LoRA, or I dunno, some kind of Huffman coding scheme that lets you mix 4-bit, 2-bit and 1-bit quantizations?

    Absolutely. The GPTQ paper claims negligible output quality loss with 3-bit quantization. The GPTQ-for-LLaMA repo supports 3-bit quantization and inference. So this extra 25% savings is already possible.
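
    For a rough sense of where those savings come from, here is a minimal round-to-nearest group quantizer in PyTorch (the function names and group size are illustrative; GPTQ itself is smarter, compensating rounding error column by column with second-order statistics instead of rounding each weight independently):

        import torch

        def quantize_rtn(weight: torch.Tensor, bits: int = 4, group_size: int = 128):
            # Symmetric round-to-nearest quantization, one scale per group of
            # `group_size` input features. Packed storage would use `bits` per value.
            out_f, in_f = weight.shape
            w = weight.reshape(out_f, in_f // group_size, group_size)
            qmax = 2 ** (bits - 1) - 1
            scale = w.abs().amax(dim=-1, keepdim=True) / qmax
            q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
            return q.to(torch.int8), scale

        def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
            return (q.float() * scale).reshape(q.shape[0], -1)

        # Back-of-the-envelope memory: 13e9 params * 4 bits is roughly 6.5 GB of
        # weights, which is why a 4-bit LLaMA-13B plus activations fits in about
        # 10 GB; dropping to 3 bits trims another ~25% off the weight storage.
        w = torch.randn(4096, 4096)
        q, s = quantize_rtn(w, bits=4)
        print((w - dequantize(q, s)).abs().mean())  # small reconstruction error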

    As of right now, GPTQ-for-LLaMA uses a VRAM-hungry attention method. Flash attention will reduce the requirements for 7B to 4GB and possibly fit 30B with a 2048-token context window into 16GB, all before stacking 3-bit quantization.
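
    To see why the attention implementation matters for VRAM: naive attention materializes a full token-by-token score matrix per head, while a fused kernel computes the same result block by block without ever storing it. A small PyTorch sketch (shapes are illustrative 7B-ish numbers, causal masking is omitted for brevity, and this is a general illustration rather than GPTQ-for-LLaMA's actual code):

        import torch
        import torch.nn.functional as F

        # Illustrative LLaMA-7B-ish shapes: 32 heads, 2048 tokens, head dim 128.
        q = torch.randn(1, 32, 2048, 128)
        k = torch.randn_like(q)
        v = torch.randn_like(q)

        # Naive attention builds a 2048 x 2048 score matrix for every head ...
        scores = (q @ k.transpose(-2, -1)) / 128 ** 0.5
        out_naive = torch.softmax(scores, dim=-1) @ v

        # ... while the fused op can dispatch to a FlashAttention-style kernel
        # that never materializes the full matrix, cutting activation memory.
        out_fused = F.scaled_dot_product_attention(q, k, v)
        print(torch.allclose(out_naive, out_fused, atol=1e-4))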

    Pruning is a possibility, but I'm not aware of anyone working on it yet.

    LoRA has already been implemented. See https://github.com/zphang/minimal-llama#peft-fine-tuning-wit...
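
    The PEFT route linked above amounts to freezing the base model and training small low-rank adapters on top of the attention projections. A minimal sketch with Hugging Face peft (the checkpoint name, target modules, and hyperparameters below are illustrative assumptions, not minimal-llama's exact configuration):

        from transformers import AutoModelForCausalLM
        from peft import LoraConfig, get_peft_model

        # Any LLaMA-style causal LM checkpoint works here; this name is an example.
        model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

        lora_config = LoraConfig(
            r=8,                                   # rank of the low-rank adapters
            lora_alpha=16,
            target_modules=["q_proj", "v_proj"],   # LLaMA attention projections
            lora_dropout=0.05,
            task_type="CAUSAL_LM",
        )

        model = get_peft_model(model, lora_config)
        model.print_trainable_parameters()  # typically well under 1% of all weights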

What are some alternatives?

When comparing FlexGen and minimal-llama, you can also consider the following projects:

llama - Inference code for Llama models

visual-chatgpt - Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [Moved to: https://github.com/microsoft/TaskMatrix]

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

whisper.cpp - Port of OpenAI's Whisper model in C/C++

text-generation-inference - Large Language Model Text Generation Inference

simple-llm-finetuner - Simple UI for LLM Model Finetuning

alpaca-lora - Instruct-tune LLaMA on consumer hardware

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

audiolm-pytorch - Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch