FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. (by lm-sys)

FastChat Alternatives

Similar projects and alternatives to FastChat

  1. llama.cpp

    LLM inference in C/C++

  2. text-generation-webui

    LLM UI with advanced features, easy setup, and multiple backend support.

  3. ollama

    Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language models.

  4. Pytorch

    393 FastChat VS Pytorch

    Tensors and Dynamic neural networks in Python with strong GPU acceleration

  5. transformers

    217 FastChat VS transformers

    🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.

  6. llama

    190 FastChat VS llama

    Inference code for Llama models

  7. koboldcpp

    Run GGUF models easily with a KoboldAI UI. One File. Zero Install.

  8. gpt4all

    GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

  9. stanford_alpaca

    Code and documentation to train Stanford's Alpaca models, and generate the data.

  10. alpaca-lora

    107 FastChat VS alpaca-lora

    Instruct-tune LLaMA on consumer hardware

  11. LocalAI

    🤖 The free, open-source alternative to OpenAI, Claude, and others. Self-hosted and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed P2P inference.

  12. mlc-llm

    90 FastChat VS mlc-llm

    Universal LLM Deployment Engine with ML Compilation

  13. alpaca.cpp

    Discontinued: Locally run an Instruction-Tuned Chat-Style LLM

  14. FLiPStackWeekly

    FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...

  15. llama-cpp-python

    Python bindings for llama.cpp

  16. litellm

    52 FastChat VS litellm

    Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]

  17. open_llama

    OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset

  18. OpenLLM

    28 FastChat VS OpenLLM

    Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.

  19. CASALIOY

    6 FastChat VS CASALIOY

    ♾️ toolkit for air-gapped LLMs on consumer-grade hardware

  20. agentflow

    Complex LLM Workflows from Simple JSON.

NOTE: The mention counts on this list reflect mentions in common posts plus user-suggested alternatives. Hence, a higher number generally indicates a better FastChat alternative or a more similar project.

FastChat reviews and mentions

Posts with mentions or reviews of FastChat. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-11-13.
  • Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac
    5 projects | news.ycombinator.com | 13 Nov 2024
    Hey, Simon! Have you considered hosting private evals yourself? I think, with the weight of the community behind you, you could easily accumulate a bunch of really high-quality, "curated" data, if you will. That is to say, people would happily send it to you. More people should self-host stuff like https://github.com/lm-sys/FastChat without revealing their dataset, I think, and people would probably trust it much more than the public stuff, considering they already trust _you_ to some extent! So far the private eval scene is just a handful of guys on Twitter reporting their findings in an unsystematic manner, but a real grassroots approach backed by a respectable influencer would go a long way toward changing that.

    Food for thought.

  • DoLa and MT-Bench - A Quick Eval of a new LLM trick
    3 projects | dev.to | 11 Jul 2024
    Made a change to [gen_model_answer.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/gen_model_answer.py), adding the dola_layers parameter (see the sketch after this list).
  • MT-Bench: Comparing different LLM Judges
    2 projects | dev.to | 8 Jun 2024
    MT-Bench is a quick (and dirty?) way to evaluate a chatbot model (a fine-tuned, instruction-following LLM). When a new open-source model is published on Hugging Face, it is not uncommon to see the score presented as a testament to its quality. For roughly $5 worth of OpenAI API calls, it gives a good ballpark of how your model does, making it a good tool for iterating on fine-tuning an assistant model.
  • GPT4.5 or GPT5 being tested on LMSYS?
    3 projects | news.ycombinator.com | 29 Apr 2024
    gpt2-chatbot isn't the only "mystery model" on LMSYS. Another is "deluxe-chat".

    When asked about it in October last year, LMSYS replied [0] "It is an experiment we are running currently. More details will be revealed later"

    One distinguishing feature of "deluxe-chat": although it gives high-quality answers, it is very slow, so slow that the arena displays a warning whenever it is invoked.

    [0] https://github.com/lm-sys/FastChat/issues/2527

  • LLMs on your local Computer (Part 1)
    7 projects | dev.to | 11 Mar 2024
    FastChat
  • FLaNK AI for 11 March 2024
    46 projects | dev.to | 11 Mar 2024
  • FLaNK 04 March 2024
    26 projects | dev.to | 4 Mar 2024
  • ChatGPT for Teams
    2 projects | news.ycombinator.com | 11 Jan 2024
  • FastChat: An open platform for training and serving large language models
    1 project | news.ycombinator.com | 24 Dec 2023
  • LM Studio – Discover, download, and run local LLMs
    17 projects | news.ycombinator.com | 22 Nov 2023
    How does it compare with something like FastChat? https://github.com/lm-sys/FastChat

    Feature set seems like a decent amount of overlap. One limitation of FastChat, as far as I can tell, is that one is limited to the models that FastChat supports (though I think it would be minor to modify it to support arbitrary models?). A sketch of querying a self-hosted FastChat instance through its OpenAI-compatible API follows this list.
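
The DoLa change described in the "DoLa and MT-Bench" post above can be pictured with a minimal, hypothetical sketch. It assumes a recent Hugging Face transformers release whose generate() accepts a dola_layers argument; the model name and generation settings below are placeholders, not the actual FastChat patch.

    # Hypothetical sketch of the kind of change described in the DoLa post above:
    # passing dola_layers through to generate(). Assumes a transformers version
    # that supports DoLa decoding; the model and settings are placeholders, not
    # the actual gen_model_answer.py patch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "lmsys/vicuna-7b-v1.5"  # placeholder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    prompt = "Explain what DoLa decoding changes about token selection."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,
        dola_layers="high",      # the extra DoLa argument ("low", "high", or layer indices)
        repetition_penalty=1.2,  # commonly suggested alongside DoLa decoding
    )
    answer = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(answer)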
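
Several mentions above touch on self-hosting FastChat and its overlap with tools like LM Studio. FastChat can expose an OpenAI-compatible REST API once its controller, a model worker, and the API server are running; the sketch below assumes such a server is already listening locally on port 8000 and serving a Vicuna worker, and the endpoint, port, and model name are placeholders for your own setup.

    # Minimal sketch of querying a self-hosted FastChat deployment through its
    # OpenAI-compatible API. Assumes the controller, a model worker, and the
    # OpenAI API server are already running locally on port 8000; the base_url
    # and model name are placeholders for your own setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # assumed local FastChat endpoint
        api_key="EMPTY",                      # a real key is not required locally
    )

    response = client.chat.completions.create(
        model="vicuna-7b-v1.5",  # placeholder: must match the registered worker
        messages=[
            {"role": "user", "content": "Summarize what FastChat does in one sentence."}
        ],
        temperature=0.7,
        max_tokens=128,
    )
    print(response.choices[0].message.content)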

Stats

Basic FastChat repo stats
  Mentions: 86
  Stars: 38,807
  Activity: 7.8
  Last commit: about 1 month ago
