Anyone hosting a local LLM server

This page summarizes the projects mentioned and recommended in the original post on /r/Oobabooga

  • text-generation-webui

    A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

  • These flags and a few others can be found in the Gradio section at https://github.com/oobabooga/text-generation-webui.

  • chat-llama-discord-bot

    A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, or any other LLM supported by text-generation-webui or llama.cpp. (by mercm8)

  • oobabot

    A Discord bot which talks to Large Language Model AIs running on oobabooga's text-generation-webui
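    A bot like this works by forwarding chat messages to the text-generation-webui API and relaying the completion back. As a minimal sketch (assuming the webui is started with its `--api` flag; the host, port, and `max_tokens` value here are illustrative assumptions, not taken from oobabot itself):

    ```python
    # Hedged sketch: calling text-generation-webui's OpenAI-compatible
    # completions endpoint. Host/port and parameters are assumptions.
    import json
    import urllib.request


    def build_payload(prompt: str, max_tokens: int = 64) -> bytes:
        """Encode a completion request body for /v1/completions."""
        return json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()


    def complete(prompt: str, host: str = "http://127.0.0.1:5000") -> str:
        """POST the prompt to the local server and return the generated text."""
        req = urllib.request.Request(
            host + "/v1/completions",
            data=build_payload(prompt),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["choices"][0]["text"]
    ```

    A Discord bot would call `complete()` inside its message handler and post the returned text as a reply.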

  • SillyTavern

    Discontinued LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern] (by Cohee1207)

  • I'm running an essentially "headless" setup with SillyTavern, SillyTavern-extras, ooba and vladmandic's Stable Diffusion fork on a WSL2 VM (Ubuntu 22.04), where:

  • SillyTavern-extras

    Discontinued Extensions API for SillyTavern [Moved to: https://github.com/SillyTavern/SillyTavern-extras] (by Cohee1207)

  • automatic

    SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models

  • triton

    Development repository for the Triton language and compiler

  • I'm pretty happy with the setup, because it allows me to keep all the AI stuff and its dozens of conda envs, repos, etc. separate from my normal setup and "portable". It may have some performance impact (although I don't personally notice any significant difference compared to running it natively on Windows), and it may enable some extra functionality, such as access to OpenAI's Triton, but that's currently neither here nor there.

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.
