FastChat Alternatives
Similar projects and alternatives to FastChat
- Stream: Scalable APIs for Chat, Feeds, Moderation, & Video. Stream helps developers build engaging apps that scale to millions, with performant and flexible Chat, Feeds, Moderation, and Video APIs and SDKs powered by a global edge network and enterprise-grade infrastructure.
- ollama: Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.
- transformers: 🤗 Transformers, the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
- LocalAI: 🤖 The free, open-source alternative to OpenAI, Claude, and others. Self-hosted and local-first; a drop-in replacement for OpenAI that runs on consumer-grade hardware with no GPU required. Runs gguf, transformers, diffusers, and many other model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed P2P inference.
- FLiPStackWeekly: FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more.
- litellm: Python SDK and proxy server (LLM gateway) to call 100+ LLM APIs in the OpenAI format (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, SageMaker, HuggingFace, Replicate, Groq).
- open_llama: OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset.
- OpenLLM: Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.
FastChat reviews and mentions
- Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac
Hey, Simon! Have you considered hosting private evals yourself? I think, with the weight of the community behind you, you could easily accumulate a bunch of really high-quality, "curated" data, if you will. That is to say, people would happily send it to you. More people should self-host stuff like https://github.com/lm-sys/FastChat without revealing their dataset, I think, and people would probably trust it much more than the public stuff, considering they already trust _you_ to some extent! So far the private eval scene is just a handful of guys on Twitter reporting their findings in an unsystematic manner, but a real grassroots approach backed by a respectable influencer would go a long way toward changing that.
Food for thought.
- DoLa and MT-Bench: A Quick Eval of a new LLM trick
Made a change to [gen_model_answer.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/gen_model_answer.py), adding the dola_layers parameter.
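For context, DoLa decoding is exposed in recent versions of transformers as a `dola_layers` argument to `model.generate()`. A hedged sketch of the kind of change described, threading such an option into the generation call; `build_generate_kwargs` is a hypothetical helper for illustration, not FastChat's actual code:

```python
# Sketch: thread a dola_layers option into the generate() call of an
# answer-generation script. Assumes a transformers version whose
# model.generate() accepts dola_layers ("low", "high", or a list of
# layer indices to contrast).
def build_generate_kwargs(max_new_tokens: int, dola_layers=None) -> dict:
    kwargs = {"max_new_tokens": max_new_tokens, "do_sample": False}
    if dola_layers is not None:
        kwargs["dola_layers"] = dola_layers
        # DoLa is typically paired with a repetition penalty
        kwargs["repetition_penalty"] = 1.2
    return kwargs

print(build_generate_kwargs(256, dola_layers="high"))
```

A model call would then look like `model.generate(**inputs, **build_generate_kwargs(256, dola_layers="high"))`, leaving the non-DoLa path unchanged when the option is omitted.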
- MT-Bench: Comparing different LLM Judges
MT-Bench is a quick (and dirty?) way to evaluate a chatbot model (a fine-tuned, instruction-following LLM). When a new open-source model is published on Hugging Face, it is not uncommon to see the score presented as a testament to quality. For roughly $5 worth of OpenAI API calls, it gives you a good ballpark of how your model does. A good tool to iterate on fine-tuning an assistant model.
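The mechanism behind those API calls is LLM-as-judge: a strong model grades each answer and emits a machine-parseable verdict. A minimal sketch of that idea (not FastChat's actual code; `build_judge_prompt` and `parse_rating` are hypothetical helpers, and real grading additionally requires sending the prompt to an OpenAI model):

```python
import re

def build_judge_prompt(question: str, answer: str) -> str:
    """Format a single-answer grading prompt; the judge is asked to
    reply with a verdict line like 'Rating: [[7]]'."""
    return (
        "[Instruction]\nPlease act as an impartial judge and rate the "
        "quality of the assistant's answer on a scale of 1 to 10. "
        "End with your verdict in the form 'Rating: [[N]]'.\n\n"
        f"[Question]\n{question}\n\n[Answer]\n{answer}"
    )

def parse_rating(judgment: str) -> int:
    """Extract the numeric score from the judge's verdict, or -1 if
    no verdict line is found."""
    m = re.search(r"Rating: \[\[(\d+)\]\]", judgment)
    return int(m.group(1)) if m else -1

print(parse_rating("The answer is concise and correct. Rating: [[7]]"))  # → 7
```

The bracketed verdict format makes parsing robust even when the judge pads its reply with free-form commentary, which is why this style of delimiter shows up in LLM-as-judge pipelines.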
- GPT4.5 or GPT5 being tested on LMSYS?
gpt2-chatbot isn't the only "mystery model" on LMSYS. Another is "deluxe-chat".
When asked about it in October last year, LMSYS replied [0] "It is an experiment we are running currently. More details will be revealed later"
One distinguishing feature of "deluxe-chat": although it gives high-quality answers, it is very slow; so slow that the arena displays a warning whenever it is invoked.
[0] https://github.com/lm-sys/FastChat/issues/2527
- LLMs on your local Computer (Part 1)
FastChat
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- ChatGPT for Teams
- FastChat: An open platform for training and serving large language models
- LM Studio – Discover, download, and run local LLMs
How does it compare with something like FastChat? https://github.com/lm-sys/FastChat
The feature sets seem to have a decent amount of overlap. One limitation of FastChat, as far as I can tell, is that you are limited to the models FastChat supports (though I think it would be a minor change to make it support arbitrary models?)
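For reference, FastChat decides how to load and prompt each model family through a registry of model adapters (in fastchat/model/model_adapter.py), so supporting a new model means registering another adapter. A self-contained sketch of that pattern, using illustrative names rather than FastChat's actual API:

```python
# Sketch of the adapter-registry pattern: each adapter declares which
# model paths it matches and how to load/prompt that family. Names are
# illustrative, not FastChat's real classes.
model_adapters = []

class BaseModelAdapter:
    def match(self, model_path: str) -> bool:
        return True  # fallback: matches everything

    def load_model(self, model_path: str) -> str:
        return f"loaded {model_path} with default settings"

class VicunaAdapter(BaseModelAdapter):
    def match(self, model_path: str) -> bool:
        return "vicuna" in model_path.lower()

    def load_model(self, model_path: str) -> str:
        return f"loaded {model_path} with vicuna conversation template"

def register_model_adapter(cls) -> None:
    model_adapters.append(cls())

# Registration order matters: the first adapter whose match() accepts
# the path wins, so the catch-all base adapter is registered last.
register_model_adapter(VicunaAdapter)
register_model_adapter(BaseModelAdapter)

def get_model_adapter(model_path: str):
    return next(a for a in model_adapters if a.match(model_path))

print(get_model_adapter("lmsys/vicuna-7b-v1.5").load_model("lmsys/vicuna-7b-v1.5"))
```

Under this design, "supporting an arbitrary model" is a matter of writing one more `match`/`load_model` pair and registering it ahead of the fallback adapter.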
Stats
lm-sys/FastChat is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of FastChat is Python.