inference-benchmark VS text-generation-inference

Compare inference-benchmark vs text-generation-inference and see what their differences are.

inference-benchmark

Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) (by tensorchord)
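
To make the comparison concrete: inference-benchmark generates load against an online serving endpoint and measures latency and throughput, while text-generation-inference is one such serving backend. The sketch below is a minimal, hypothetical load-test loop against a TGI-style /generate route; the endpoint URL, request body, concurrency, and request count are illustrative assumptions, not inference-benchmark's actual configuration.

import asyncio
import time

import aiohttp

URL = "http://localhost:8080/generate"  # hypothetical local TGI endpoint
PAYLOAD = {"inputs": "Hello", "parameters": {"max_new_tokens": 32}}  # TGI-style body (assumption)


async def one_request(session):
    """Send a single generation request and return its wall-clock latency in seconds."""
    t0 = time.perf_counter()
    async with session.post(URL, json=PAYLOAD) as resp:
        await resp.read()
    return time.perf_counter() - t0


async def main(concurrency=8, total=64):
    """Fire `total` requests with at most `concurrency` in flight, then report p50 and throughput."""
    sem = asyncio.Semaphore(concurrency)
    async with aiohttp.ClientSession() as session:

        async def worker():
            async with sem:
                return await one_request(session)

        start = time.perf_counter()
        latencies = await asyncio.gather(*[worker() for _ in range(total)])
        elapsed = time.perf_counter() - start
        print(f"p50 latency: {sorted(latencies)[len(latencies) // 2]:.3f}s")
        print(f"throughput: {total / elapsed:.1f} req/s")


if __name__ == "__main__":
    asyncio.run(main())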
                 inference-benchmark    text-generation-inference
Mentions         1                      29
Stars            26                     7,995
Growth           -                      7.5%
Activity         6.4                    9.6
Latest commit    11 months ago          7 days ago
Language         Python                 Python
License          -                      Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
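
As a rough illustration of what such a recency-weighted metric could look like, here is a minimal Python sketch. The exponential decay, the 30-day half-life, and the percentile mapping to a 0-10 scale are assumptions for illustration, not the site's published formula.

from datetime import datetime, timezone

HALF_LIFE_DAYS = 30.0  # assumed decay half-life; newer commits count more


def activity_score(commit_dates, now=None):
    """Recency-weighted commit count: a commit's weight halves every HALF_LIFE_DAYS.

    `commit_dates` is an iterable of timezone-aware datetimes.
    """
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for ts in commit_dates:
        age_days = (now - ts).total_seconds() / 86400.0
        score += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return score


def relative_activity(scores, name):
    """Map a raw score to a 0-10 scale by percentile rank across all tracked
    projects, so a value of 9.0 means the project is in the top 10%."""
    below_or_equal = sum(1 for s in scores.values() if s <= scores[name])
    return 10.0 * below_or_equal / len(scores)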

inference-benchmark

Posts with mentions or reviews of inference-benchmark. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-05.

text-generation-inference

Posts with mentions or reviews of text-generation-inference. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-22.

What are some alternatives?

When comparing inference-benchmark and text-generation-inference you can also consider the following projects:

agentchain - Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks

llama-cpp-python - Python bindings for llama.cpp

ChatFred - Alfred workflow using ChatGPT, DALL·E 2 and other models for chatting, image generation and more.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

mosec - A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute resources

exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.

inference - Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.

basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.

truss - The simplest way to serve AI/ML models in production

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

safetensors - Simple, safe way to store and distribute tensors