FlexGen VS llama-cpu

Compare FlexGen and llama-cpu and see how they differ.

FlexGen

Running large language models on a single GPU for throughput-oriented scenarios. (by FMInference)
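For context, FlexGen targets throughput over latency by offloading model state to CPU memory and disk so large models fit on one GPU. A minimal invocation, as best I recall from the project's README (treat the module path and flags as an assumption and check the repo before relying on them):

    python3 -m flexgen.flex_opt --model facebook/opt-1.3b

Larger models are run the same way; additional flags described in the README (e.g. --percent) control how weights, activations, and the KV cache are split across GPU, CPU, and disk.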

llama-cpu

Fork of Facebook's LLaMA model to run on CPU (by markasoftware)
                 FlexGen               llama-cpu
Mentions         39                    9
Stars            9,007                 775
Growth (stars)   0.8%                  -
Activity         3.0                   3.1
Latest commit    15 days ago           about 1 year ago
Language         Python                Python
License          Apache License 2.0    GNU General Public License v3.0 only
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
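The exact formula isn't published here, but the description above (recent commits weighted more heavily than older ones) behaves like an exponentially decayed sum. The sketch below is purely illustrative, with a made-up half-life; it is an assumption, not the site's actual computation.

    from datetime import datetime, timezone

    def activity_score(commit_dates, half_life_days=30.0):
        # Each commit contributes a weight that halves every `half_life_days`,
        # so recent commits dominate the score. Illustrative assumption only;
        # not the comparison site's real formula.
        now = datetime.now(timezone.utc)
        score = 0.0
        for d in commit_dates:
            age_days = (now - d).total_seconds() / 86400.0
            score += 0.5 ** (age_days / half_life_days)
        return score

    # A recent burst of commits scores higher than the same number of old commits.
    recent = [datetime.now(timezone.utc)] * 5
    print(activity_score(recent))  # close to 5.0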

FlexGen

Posts with mentions or reviews of FlexGen. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-03.

llama-cpu

Posts with mentions or reviews of llama-cpu. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-08.

What are some alternatives?

When comparing FlexGen and llama-cpu, you can also consider the following projects:

llama - Inference code for Llama models

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

text-generation-inference - Large Language Model Text Generation Inference

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

whisper.cpp - Port of OpenAI's Whisper model in C/C++

wrapyfi-examples_llama - Inference code for facebook LLaMA models with Wrapyfi support

DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

bitsandbytes-win-prebuilt

audiolm-pytorch - Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.