Introducing llamacpp-for-kobold, run llama.cpp locally with a fancy web UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and more with minimal setup.

This page summarizes the projects mentioned and recommended in the original post on /r/KoboldAI

  • llama.cpp

    LLM inference in C/C++

  • There's an important fix for 65B models upstream: https://github.com/ggerganov/llama.cpp/pull/438/files. I've verified it works on my local copy. Can your fork be updated from upstream? Without it, llama will segfault because the required memory is under-estimated.
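The failure mode described in that comment, a crash caused by an under-estimated memory requirement, can be illustrated with a small sketch. This is not the actual llama.cpp fix; the size table and the `check_fits` helper are hypothetical, and only show why validating an estimate before loading matters.

```python
# Hypothetical sketch: why an under-estimated memory requirement is dangerous.
# The size table and helper are illustrative, not taken from llama.cpp.

MODEL_MEM_ESTIMATES = {          # pre-allocated scratch memory per model size
    "7B":  1 * 1024**3,
    "13B": 2 * 1024**3,
    "30B": 4 * 1024**3,
    "65B": 8 * 1024**3,          # if this entry is too small, loading a 65B
}                                # model overruns the buffer (a segfault in C)

def check_fits(model_size: str, actual_required: int) -> bool:
    """Return True if the pre-allocated estimate covers the real requirement."""
    estimate = MODEL_MEM_ESTIMATES.get(model_size, 0)
    return estimate >= actual_required
```

In C/C++ there is no such guard by default, so an estimate that is too low turns directly into an out-of-bounds write, which is why the upstream fix matters.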

  • llamacpp-for-kobold

    Discontinued Port of Facebook's LLaMA model in C/C++ [Moved to: https://github.com/LostRuins/koboldcpp]

  • Enter llamacpp-for-kobold

  • koboldcpp

    Port of Facebook's LLaMA model in C/C++ (by henk717)

  • There's also a single-file version, where you just drag and drop your llama model onto the .exe file and connect KoboldAI to the displayed link.
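Once the server is running and showing its link, any KoboldAI-compatible client can talk to it over HTTP. Below is a hedged sketch: the `/api/v1/generate` path and the payload fields follow the KoboldAI API as I understand it, and the default URL and port are assumptions that may differ on your machine.

```python
import json
import urllib.request

def build_generate_request(prompt: str,
                           endpoint: str = "http://localhost:5001"):
    """Build a KoboldAI-style generate request (payload fields assumed)."""
    payload = {"prompt": prompt, "max_length": 80, "temperature": 0.7}
    return urllib.request.Request(
        endpoint + "/api/v1/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(prompt: str, endpoint: str = "http://localhost:5001") -> str:
    """Send the request and return the generated text (makes a network call)."""
    with urllib.request.urlopen(build_generate_request(prompt, endpoint)) as resp:
        return json.loads(resp.read())["results"][0]["text"]
```

Pointing `endpoint` at the link the .exe displays should be the only change needed.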

  • TavernAI

    Discontinued TavernAI for nerds [Moved to: https://github.com/Cohee1207/SillyTavern] (by SillyLossy)

  • It looks like some endpoints that flavors of TavernAI depend on are missing. E.g., this promising version of TavernAI needs /config/soft_prompts
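For anyone experimenting locally, a missing endpoint like that can be stubbed so the client stops erroring while real support lands. A minimal sketch using Python's standard library; the response shape (an empty soft-prompt list) is an assumption, not the documented KoboldAI schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SoftPromptStub(BaseHTTPRequestHandler):
    """Minimal stub serving /config/soft_prompts with an empty list."""

    def do_GET(self):
        if self.path == "/config/soft_prompts":
            body = json.dumps({"soft_prompts": []}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the console quiet

def make_server(port: int = 0) -> HTTPServer:
    """Bind on the given port (0 = any free port) and return the server."""
    return HTTPServer(("127.0.0.1", port), SoftPromptStub)
```

Running `make_server(5001).serve_forever()` (or whatever port the client expects) gives TavernAI something valid to query until the endpoint exists upstream.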

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives; a higher number indicates a more popular project.
