FastChat VS litellm

Compare FastChat vs litellm and see how they differ.

FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. (by lm-sys)

litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs) (by BerriAI)
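
litellm's core idea is that one completion() call speaks the OpenAI format regardless of the backend. A minimal sketch (the model names are examples; each provider expects its own API key in the environment):

    from litellm import completion  # pip install litellm

    messages = [{"role": "user", "content": "Hello, world"}]

    # Same call shape for every provider; only the model string changes.
    # Requires the matching key in the environment, e.g. OPENAI_API_KEY
    # or ANTHROPIC_API_KEY; "ollama/" models hit a local Ollama server.
    response = completion(model="gpt-3.5-turbo", messages=messages)
    response = completion(model="claude-3-haiku-20240307", messages=messages)
    response = completion(model="ollama/llama2", messages=messages)

    # Responses come back in the OpenAI shape regardless of provider.
    print(response.choices[0].message.content)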
                FastChat                litemllm
Mentions        82                      28
Stars           33,877                  8,225
Growth          5.4%                    30.4%
Activity        9.6                     10.0
Last commit     about 14 hours ago      2 days ago
Language        Python                  Python
License         Apache License 2.0      GNU General Public License v3.0 or later
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

FastChat

Posts with mentions or reviews of FastChat. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-11.

litellm

Posts with mentions or reviews of litellm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-05.
  • Anthropic launches Tool Use (function calling)
    3 projects | news.ycombinator.com | 5 Apr 2024
    There are a few libs that already abstract this away, for example:

    - https://github.com/BerriAI/litellm

    - https://jxnl.github.io/instructor/

    - langchain

    It's not hard for me to imagine a future where there is something like the CNCF for AI models, tools, and infra.
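
To illustrate the kind of abstraction those libraries provide, here is a rough sketch of tool use in the OpenAI "tools" format routed through litellm; get_weather is a hypothetical tool, and the model string is just one example of a provider with tool-calling support:

    from litellm import completion

    # Hypothetical tool, declared in the OpenAI function-calling schema;
    # litellm translates this for providers such as Anthropic.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = completion(
        model="claude-3-opus-20240229",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    # If the model decided to call the tool, the request shows up here
    # in OpenAI-style tool_calls, whatever the underlying provider.
    print(response.choices[0].message.tool_calls)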

  • Ask HN: Python Meta-Client for OpenAI, Anthropic, Gemini LLM and other API-s?
    1 project | news.ycombinator.com | 7 Mar 2024
Hey, are you just looking for litellm? https://github.com/BerriAI/litellm

    Context: I'm the repo maintainer.

  • Voxos.ai – An Open-Source Desktop Voice Assistant
    7 projects | news.ycombinator.com | 19 Jan 2024
    It should be possible using LiteLLM and a patch or a proxy.

    https://github.com/BerriAI/litellm

  • Show HN: Talk to any ArXiv paper just by changing the URL
    5 projects | news.ycombinator.com | 20 Dec 2023
  • Integrate LLM Frameworks
    5 projects | dev.to | 10 Dec 2023
    This article will demonstrate how txtai can integrate with llama.cpp, LiteLLM and custom generation methods. For custom generation, we'll show how to run inference with a Mamba model.
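    A sketch of what that txtai integration can look like, assuming txtai's LLM pipeline with the litellm method is available (the model string is illustrative):

        from txtai.pipeline import LLM  # pip install txtai

        # Illustrative: dispatch generation through LiteLLM using a
        # provider/model string (here a local Ollama model).
        llm = LLM("ollama/llama2", method="litellm")
        print(llm("Explain why an LLM framework abstraction is useful."))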
  • Is there any open source app to load a model and expose API like OpenAI?
    5 projects | /r/LocalLLaMA | 9 Dec 2023
    I use this with ollama and it works perfectly: https://github.com/BerriAI/litellm
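
    A minimal sketch of that setup: run the litellm proxy in front of Ollama, then talk to it with the standard OpenAI client. The base_url and port below are assumptions; the proxy's default port has varied across litellm releases, so check yours:

        # Start the proxy in another terminal first (needs litellm[proxy]):
        #   litellm --model ollama/llama2
        import openai

        # The proxy does not require a real API key by default.
        client = openai.OpenAI(base_url="http://0.0.0.0:4000", api_key="anything")
        response = client.chat.completions.create(
            model="ollama/llama2",
            messages=[{"role": "user", "content": "Hello through the proxy"}],
        )
        print(response.choices[0].message.content)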
  • OpenAI Switch Kit: Swap OpenAI with any open-source model
    5 projects | news.ycombinator.com | 6 Dec 2023
    Another abstraction layer library is: https://github.com/BerriAI/litellm

    For me the killer feature of a library like this would be if it implemented function calling, even if only for a very restricted grammar - like the traditional ReAct prompt:

      Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types:
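
    To make that concrete, here is a hypothetical sketch of function calling emulated over exactly that restricted grammar: the model is asked to emit Action: tool[input] lines, which the caller parses and executes. call_model and the tool table are stand-ins, not litellm APIs:

        import re

        def call_model(prompt: str) -> str:
            # Stand-in for any OpenAI-format completion call (e.g. via
            # litellm); returns a canned ReAct turn so the sketch runs.
            return "Thought: I should look this up.\nAction: search[ReAct prompting]"

        TOOLS = {"search": lambda q: f"(toy search results for {q!r})"}

        ACTION = re.compile(r"Action:\s*(\w+)\[(.*)\]")

        def react_step(transcript: str) -> str:
            """Run one Thought/Action/Observation turn of a ReAct loop."""
            output = call_model(transcript)
            match = ACTION.search(output)
            if not match:
                return transcript + output  # no action requested: final answer
            tool, arg = match.groups()
            observation = TOOLS.get(tool, lambda _: "unknown tool")(arg)
            return transcript + output + f"\nObservation: {observation}\n"

        print(react_step("Question: What is ReAct prompting?\n"))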
  • LibreChat
    9 projects | news.ycombinator.com | 2 Dec 2023
  • LM Studio – Discover, download, and run local LLMs
    17 projects | news.ycombinator.com | 22 Nov 2023
  • Please!!! Help me!!!! Open Interpreter. Chatgpt-4. Mac, Terminals.
    1 project | /r/OPENINTERPRETER | 21 Nov 2023
    Welcome to Open Interpreter.

    ▌ OpenAI API key not found. To use GPT-4 (recommended) please provide an OpenAI API key. To use Code-Llama (free but less capable) press enter.

    OpenAI API key: [the API key I inputted]

    Tip: To save this key for later, run export OPENAI_API_KEY=your_api_key on Mac/Linux or setx OPENAI_API_KEY your_api_key on Windows.

    ▌ Model set to GPT-4. Open Interpreter will require approval before running code. Use interpreter -y to bypass this. Press CTRL-C to exit.

    > export OPENAI_API_KEY=your_api_key

    Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
    LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

    Traceback (most recent call last):
      ...
      File ".../site-packages/interpreter/llm/setup_openai_coding_llm.py", line 94, in coding_llm
        response = litellm.completion(**params)
      ...
      File ".../site-packages/litellm/main.py", line 423, in completion
        response = openai.ChatCompletion.create(
      ...
      File ".../site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
        raise self.handle_error_response(
    openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.

What are some alternatives?

When comparing FastChat and litellm you can also consider the following projects:

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

llama.cpp - LLM inference in C/C++

LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers and many more model architectures, and can generate text, audio, video and images, with voice-cloning capabilities.

gpt4all - gpt4all: run open-source LLMs anywhere

dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

libsql - libSQL is a fork of SQLite that is both Open Source, and Open Contributions.

llama-cpp-python - Python bindings for llama.cpp

FLiPStackWeekly - FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...