Text-generation-inference Alternatives
Similar projects and alternatives to text-generation-inference
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- ollama: Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.
- Open-Assistant: OpenAssistant is a chat-based assistant that understands tasks, interacts with third-party systems, and retrieves information dynamically to do so.
- Graal: GraalVM compiles Java applications into native executables that start instantly, scale fast, and use fewer compute resources 🚀
- LocalAI: 🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: text, audio, video, and image generation; voice cloning; distributed P2P inference.
- FLiPStackWeekly: FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...
- Mage: 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
- exllama: A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
- server: The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
- optimum: 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools.
text-generation-inference reviews and mentions
- Best LLM Inference Engines and Servers to Deploy LLMs in Production
  GitHub repository: https://github.com/huggingface/text-generation-inference
- FLaNK AI - April 22, 2024
- Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat
  I wanted to write that the TGI inference engine is no longer open source, but they have reverted the license back to Apache 2.0 for the new version, TGI v2.0: https://github.com/huggingface/text-generation-inference/rel...
  Good news!
- Hugging Face reverts the license back to Apache 2.0
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
- FLaNK Stack 05 Feb 2024
- Is there any open source app to load a model and expose API like OpenAI?
- AI Code assistant for about 50-70 users
  Setting up a server for multiple users is very different from setting up an LLM for yourself. A safe bet would be to just use TGI, which supports continuous batching and is very easy to run via Docker on your server (see the sketch below). https://github.com/huggingface/text-generation-inference
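  A minimal sketch of that setup, assuming TGI's standard Docker image and its documented /generate endpoint; the model ID, port, and generation parameters below are illustrative:

  ```python
  import requests

  # Assumes the server was launched with TGI's standard Docker image,
  # roughly like this (model ID and port are illustrative):
  #   docker run --gpus all --shm-size 1g -p 8080:80 \
  #     ghcr.io/huggingface/text-generation-inference:latest \
  #     --model-id mistralai/Mistral-7B-Instruct-v0.2

  # TGI's /generate endpoint takes a prompt plus generation parameters
  # and returns the completion as JSON.
  resp = requests.post(
      "http://localhost:8080/generate",
      json={
          "inputs": "def fibonacci(n):",
          "parameters": {"max_new_tokens": 64, "temperature": 0.2},
      },
      timeout=60,
  )
  resp.raise_for_status()
  print(resp.json()["generated_text"])
  ```

  Continuous batching happens server-side, so many users can hit the same endpoint concurrently without any client-side coordination.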
- LocalPilot: Open-source GitHub Copilot on your MacBook
  Okay, I actually got local copilot set up. You will need these 4 things.
  1) CodeLlama 13B or another FIM model: https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in the Middle" models because you're looking at context on both sides of your cursor.
  2) HuggingFace llm-ls: https://github.com/huggingface/llm-ls A large language model Language Server (is this making sense yet?).
  3) The HuggingFace inference framework: https://github.com/huggingface/text-generation-inference At least when I tested, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty HuggingFace inference server. Just configure and run it; then configure and run llm-ls.
  4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full, honest-to-god copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great: boilerplate works so much easier with the surrounding context. I'd like to see more models released that can work this way. (A rough sketch of a FIM request follows this list.)
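  For the FIM piece, CodeLlama's model card documents an infilling format that wraps the code before and after the cursor in special tokens. A rough sketch of such a request against a local TGI server; the endpoint, port, and helper name are assumptions carried over from the sketch above:

  ```python
  import requests

  def fim_complete(prefix: str, suffix: str) -> str:
      # <PRE>/<SUF>/<MID> follow CodeLlama's documented infilling format;
      # the local TGI endpoint is an assumption, as in the earlier sketch.
      prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"
      resp = requests.post(
          "http://localhost:8080/generate",
          json={"inputs": prompt, "parameters": {"max_new_tokens": 128}},
          timeout=60,
      )
      resp.raise_for_status()
      return resp.json()["generated_text"]

  # Example: complete a function body given context on both sides of the cursor.
  print(fim_complete("def add(a, b):\n    ", "\n\nprint(add(2, 3))"))
  ```

  In an editor, llm-ls assembles this kind of prompt from the buffer around your cursor; the sketch only shows what travels over the wire.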
- Mistral 7B Paper on ArXiv
  A simple microservice would be https://github.com/huggingface/text-generation-inference . Works flawlessly in Docker on my Windows machine, which is extremely shocking.
Stats
huggingface/text-generation-inference is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of text-generation-inference is Python.
Popular Comparisons
- text-generation-inference VS ollama
- text-generation-inference VS llama-cpp-python
- text-generation-inference VS vllm
- text-generation-inference VS exllama
- text-generation-inference VS FlexLLMGen
- text-generation-inference VS basaran
- text-generation-inference VS mosec
- text-generation-inference VS server
- text-generation-inference VS safetensors
- text-generation-inference VS blog