Local Alternatives of ChatGPT and Midjourney

This page summarizes the projects mentioned and recommended in the original post on /r/selfhosted.

  • text-generation-webui

    A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

  • I'm trying this, and after an hour of playing around with it, it still sucks.
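
    As a rough illustration of how the webui can be scripted, here is a minimal Python sketch against its OpenAI-compatible API; it assumes the server was started with the --api flag, that a model is already loaded, and that the default port 5000 is in use.

      import requests

      # Query text-generation-webui's OpenAI-compatible endpoint (requires --api).
      # Host, port, and generation settings here are assumptions, not guarantees.
      resp = requests.post(
          "http://127.0.0.1:5000/v1/chat/completions",
          json={
              "messages": [{"role": "user", "content": "Name three self-hosted LLM frontends."}],
              "max_tokens": 128,
          },
          timeout=120,
      )
      print(resp.json()["choices"][0]["message"]["content"])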

  • serge

    A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.

  • Another project I came across on Reddit is Serge: https://github.com/nsarrazin/serge. I have not tried it myself yet, but I have put it on my to-do list for the near future, alongside the already mentioned GPT4all.

  • llama.cpp

    LLM inference in C/C++

  • LLaMA, Pythia, RWKV, Flan-T5 (self-hosted), FlexGen
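
    For scripting llama.cpp from Python, the separate llama-cpp-python bindings wrap the same inference code; the sketch below assumes a quantized GGUF model has already been downloaded, and the file name is only a placeholder.

      from llama_cpp import Llama  # pip install llama-cpp-python

      # Load a local GGUF model (path is a placeholder) and run a short completion.
      llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
      out = llm("Q: What does llama.cpp do? A:", max_tokens=64, stop=["Q:"])
      print(out["choices"][0]["text"])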

  • lollms-webui

    Lord of Large Language Models Web User Interface

  • sd-webui-lobe-theme

    🅰️ Lobe theme - The modern theme for stable diffusion webui, exquisite interface design, highly customizable UI, and efficiency boosting features.

  • stable-diffusion-webui

    Stable Diffusion web UI

  • InvokeAI is an alternative to AUTOMATIC1111's stable-diffusion-webui as a front-end for Stable Diffusion, and both should be able to run on an RTX 4000. The base models aren't the easiest to get the best results from, but you will find many alternative models on https://civitai.com/ that can all be used with the webui.
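
    The webui also exposes an HTTP API when launched with --api; the Python sketch below assumes the default port 7860 and a model already loaded, and writes the first returned image to disk.

      import base64
      import requests

      # Call AUTOMATIC1111's txt2img endpoint (requires launching with --api).
      r = requests.post(
          "http://127.0.0.1:7860/sdapi/v1/txt2img",
          json={"prompt": "a lighthouse at dusk, oil painting", "steps": 20},
          timeout=300,
      )
      r.raise_for_status()

      # The response carries base64-encoded PNGs in the "images" list.
      with open("out.png", "wb") as f:
          f.write(base64.b64decode(r.json()["images"][0]))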

  • InvokeAI

    InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.


  • civitai

    A repository of models, textual inversions, and more

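    Checkpoints from civitai.com are plain files that drop into the webui's model folder; the sketch below is only illustrative, with a placeholder download URL and the standard AUTOMATIC1111 directory layout assumed.

      import requests

      # Placeholder URL: each model page on civitai exposes its own download link.
      url = "https://civitai.com/api/download/models/<version-id>"
      dest = "stable-diffusion-webui/models/Stable-diffusion/custom-model.safetensors"

      # Stream the checkpoint to disk so large files don't sit in memory.
      with requests.get(url, stream=True, timeout=60) as r:
          r.raise_for_status()
          with open(dest, "wb") as f:
              for chunk in r.iter_content(chunk_size=1 << 20):
                  f.write(chunk)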

  • Open-Assistant

    OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

  • I believe Open-Assistant can be run locally, but it's currently still in an early phase. It's probably best to wait a month or two more for it to get better.
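
    The released OpenAssistant fine-tunes can be loaded like any other Hugging Face causal LM; the model name and prompt tokens below follow the model card for one of the Pythia-based SFT checkpoints and should be treated as assumptions.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      # One of the published SFT checkpoints; roughly 12B parameters, so this
      # needs a large GPU or offloading (device_map="auto" requires accelerate).
      name = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"
      tok = AutoTokenizer.from_pretrained(name)
      model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

      # The Pythia-based checkpoints expect this prompter/assistant token format.
      prompt = "<|prompter|>What is a reverse proxy?<|endoftext|><|assistant|>"
      inputs = tok(prompt, return_tensors="pt").to(model.device)
      print(tok.decode(model.generate(**inputs, max_new_tokens=128)[0]))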

  • stable-diffusion-webui-docker

    Easy Docker setup for Stable Diffusion with user-friendly UI

  • stanford_alpaca

    Code and documentation to train Stanford's Alpaca models, and generate the data.

  • Yep, there are a lot of LLaMA models available; some are very good, but they of course require more resources, and many of them are capable of taking on GPT-4. Stanford's Alpaca is the one I've mostly seen talked about, but I'm not sure it is necessarily the best option.
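
    For context, the Alpaca training data follows a fixed instruction template; the sketch below reproduces the no-input variant of that prompt format (treat the exact whitespace as an assumption), since any model fine-tuned on the data expects prompts shaped the same way.

      # Instruction-only variant of the Alpaca prompt template (no input field).
      ALPACA_TEMPLATE = (
          "Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          "### Instruction:\n{instruction}\n\n### Response:\n"
      )

      prompt = ALPACA_TEMPLATE.format(instruction="Explain what quantization does to an LLM.")
      print(prompt)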

  • pythia

    The hub for EleutherAI's work on interpretability and learning dynamics

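    The Pythia checkpoints are standard Hugging Face models; the sketch below picks the 410M-parameter size only to keep the download small.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Small Pythia checkpoint from EleutherAI's suite (sizes range from 70M to 12B).
      name = "EleutherAI/pythia-410m"
      tok = AutoTokenizer.from_pretrained(name)
      model = AutoModelForCausalLM.from_pretrained(name)

      inputs = tok("Self-hosting an LLM means", return_tensors="pt")
      print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0]))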

  • RWKV-LM

    RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, VRAM savings, fast training, "infinite" ctx_len, and free sentence embeddings.

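    RWKV checkpoints can also be run through transformers (which gained RWKV support around v4.29); the specific model name below is an assumption based on the RWKV-4 Pile releases on Hugging Face.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Small RWKV-4 checkpoint; larger "raven" chat fine-tunes exist as well.
      name = "RWKV/rwkv-4-169m-pile"
      tok = AutoTokenizer.from_pretrained(name)
      model = AutoModelForCausalLM.from_pretrained(name)

      inputs = tok("In a small town by the sea,", return_tensors="pt")
      print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0]))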

  • FlexGen

    Running large language models on a single GPU for throughput-oriented scenarios.


