KoboldAI VS llama.cpp

Compare KoboldAI vs llama.cpp and see how they differ.

             KoboldAI                                 llama.cpp
Mentions     41                                       769
Stars        327                                      56,891
Growth       -                                        -
Activity     9.5                                      10.0
Last commit  14 days ago                              3 days ago
Language     C++                                      C++
License      GNU Affero General Public License v3.0   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
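
The page does not publish its activity formula, only that recent commits weigh more than older ones. A minimal sketch of one plausible recency weighting (the half-life value and the function name are illustrative assumptions, not the site's actual method):

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age / half_life), so a commit from today counts fully
    and one from a half-life ago counts half as much."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Three recent commits outscore three much older ones.
recent = activity_score([1, 2, 3])
stale = activity_score([300, 310, 320])
```

The relative score is then presumably normalized across all tracked projects to produce the 0-10 scale shown above.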

KoboldAI

Posts with mentions or reviews of KoboldAI. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-02.
  • LLM spews nonsense in CVE report for curl
    3 projects | news.ycombinator.com | 2 Jan 2024
    It’s not as big a task as all that. There are a lot of unaligned models available, and user interfaces that aren’t that hard to use.

    https://github.com/henk717/KoboldAI

  • Chat with, and help host, a free community LLM "horde"
    2 projects | news.ycombinator.com | 9 Oct 2023
    https://github.com/henk717/KoboldAI

    - Hosts pick a quantized community LLM to run, which is (IMO) the real magic of this system. Cloud services tend to run generic Llama chat/instruct models, OpenAI API models, or maybe a single proprietary finetune, but the Llama/Mistral finetuning community is red hot. New finetunes and crazy merges/hybrids that outperform llama-chat in specific tasks (mostly Chat/Story/RP) come out every day, and each one has a different "flavor" and format:

    https://huggingface.co/models?sort=modified&search=mistral+g...

  • Run LLMs with KoboldAI on Intel ARC
    1 project | /r/IntelArc | 12 Sep 2023
  • No idea what I'm doing help
    2 projects | /r/KoboldAI | 1 Sep 2023
    Sourceforge is our official version, but that one is too old to run newer models like Holomax; the releases for United can be found here: https://github.com/henk717/KoboldAI/releases
  • Still getting "read only" on JanitorAI even after setting model. Do I need to change anything config wise to get it to use pygmalion?
    1 project | /r/KoboldAI | 8 Jul 2023
    Colab Check: False, TPU: False
    INIT | OK | KAI Horde Models
    INFO | __main__::648 - We loaded the following model backends: KoboldAI API, KoboldAI Old Colab Method, Huggingface, GooseAI, Horde, OpenAI, Read Only
    INFO | __main__:general_startup:1363 - Running on Repo: https://github.com/henk717/koboldai Branch:
    INIT | Starting | Flask
    INIT | OK | Flask
    INIT | Starting | Webserver
    INIT | OK | Webserver
    MESSAGE | Webserver started! You may now connect with a browser at http://127.0.0.1:8501
    INIT | Searching | GPU support
    INIT | Found | GPU support
    INIT | Starting | LUA bridge
    INIT | OK | LUA bridge
    INIT | Starting | LUA Scripts
    INIT | OK | LUA Scripts
    Setting Seed
    Traceback (most recent call last):
      File "B:\python\lib\site-packages\eventlet\hubs\selects.py", line 59, in wait
        listeners.get(fileno, hub.noop).cb(fileno)
      File "B:\python\lib\site-packages\eventlet\greenthread.py", line 221, in main
        result = function(*args, **kwargs)
      File "B:\python\lib\site-packages\eventlet\wsgi.py", line 837, in process_request
        proto.__init__(conn_state, self)
      File "B:\python\lib\site-packages\eventlet\wsgi.py", line 352, in __init__
        self.finish()
      File "B:\python\lib\site-packages\eventlet\wsgi.py", line 751, in finish
        BaseHTTPServer.BaseHTTPRequestHandler.finish(self)
      File "B:\python\lib\socketserver.py", line 811, in finish
        self.wfile.close()
      File "B:\python\lib\socket.py", line 687, in write
        return self._sock.send(b)
      File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 401, in send
        return self._send_loop(self.fd.send, data, flags)
      File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 388, in _send_loop
        return send_method(data, *args)
    ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
    Removing descriptor: 1488
    Connection Attempt: 127.0.0.1
    INFO | __main__:do_connect:2574 - Client connected!
    UI_1
    TODO: Allow config
    INFO | modeling.inference_models.hf:set_input_parameters:189 - {'0_Layers': 18, 'CPU_Layers': 10, 'Disk_Layers': 0, 'class': 'model', 'label': 'PygmalionAI_pygmalion-6b', 'id': 'PygmalionAI_pygmalion-6b', 'name': 'PygmalionAI_pygmalion-6b', 'size': '', 'menu': 'Custom', 'path': 'C:\\KoboldAI\\models\\PygmalionAI_pygmalion-6b', 'ismenu': 'false', 'plugin': 'Huggingface'}
    INIT | Searching | GPU support
    INIT | Found | GPU support
    Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:19<00:00, 9.60s/it]
    Loading model tensors: 100%|##########| 56/56 [00:05<00:00, 9.52it/s]
    INIT | Starting | LUA bridge
    INIT | OK | LUA bridge
    INIT | Starting | LUA Scripts
    INIT | OK | LUA Scripts
    Setting Seed
    Connection Attempt: 127.0.0.1
    INFO | __main__:do_connect:2574 - Client connected!
    UI_1
  • Kobold API URL for Chub Venus Ai
    1 project | /r/KoboldAI | 2 Jul 2023
    That is our developer version; it's selectable in the Colab version dropdown and also available at https://github.com/henk717/koboldai
  • I got KoboldAI running on my computer and successfully connected it to Janitor, heres a small tutorial
    1 project | /r/JanitorAI_Official | 1 Jul 2023
    Download Kobold from THIS LINK: https://github.com/henk717/KoboldAI. I downloaded Kobold from a different GitHub link and it wouldn't work; you need to get this specific one. Click on "Code", then download the zip.
  • I created a repo on Github to categorize AI models. You can browse AIs from many categories!
    6 projects | /r/InternetIsBeautiful | 30 Jun 2023
    https://github.com/henk717/KoboldAI
    https://github.com/LostRuins/koboldcpp/
    https://github.com/ggerganov/llama.cpp
    https://github.com/AUTOMATIC1111/stable-diffusion-webui
    https://github.com/oobabooga/text-generation-webui
  • Meta’s new AI lets people make chatbots. They’re using it for sex.
    4 projects | /r/LocalLLaMA | 26 Jun 2023
    For the third, I don't think Oobabooga supports the horde but KoboldAI does. I won't go into how to install KoboldAI since Oobabooga should give you enough freedom with 7B, 13B and maybe 30B models (depending on available RAM), but KoboldAI lets you download some models directly from the web interface, supports using online service providers to run the models for you, and supports the horde with a list of available models to choose from.
  • Kobold AI broke after update (New to this)
    2 projects | /r/KoboldAI | 7 Jun 2023
    "Your Pytorch installation did not update correctly, you can solve this by running install_requirements.bat in the mode where it deletes the existing runtime. Alternative you can download a fresh copy of the offline installer for KoboldAI United from : https://github.com/henk717/KoboldAI/releases"

llama.cpp

Posts with mentions or reviews of llama.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-21.
  • Phi-3 Weights Released
    1 project | news.ycombinator.com | 23 Apr 2024
    well https://github.com/ggerganov/llama.cpp/issues/6849
  • Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
    3 projects | news.ycombinator.com | 21 Apr 2024
  • Llama.cpp Working on Support for Llama3
    1 project | news.ycombinator.com | 18 Apr 2024
  • Embeddings are a good starting point for the AI curious app developer
    7 projects | news.ycombinator.com | 17 Apr 2024
    Have just done this recently for the local chat-with-PDF feature in https://recurse.chat. (It's a macOS app with a built-in llama.cpp server and a local vector database.)

    Running an embedding server locally is pretty straightforward:

    - Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
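
Once a llama.cpp server is running in embedding mode, querying it comes down to one HTTP POST. A minimal client sketch (the port, endpoint path, and JSON field names are assumptions that vary across llama.cpp server versions; check your build's server README):

```python
import json
import urllib.request

def embed(text, url="http://127.0.0.1:8080/embedding"):
    """POST text to a llama.cpp server started with --embedding.
    Assumes the older /embedding endpoint shape; newer builds also
    expose an OpenAI-compatible /v1/embeddings route."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"content": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a, b):
    """Cosine similarity between two embedding vectors, for ranking
    stored chunks against a query."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)
```

With this in place, chat-with-PDF reduces to embedding document chunks once, then ranking them by `cosine` against the embedded query at question time.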

  • Mixtral 8x22B
    4 projects | news.ycombinator.com | 17 Apr 2024
  • Llama.cpp: Improve CPU prompt eval speed
    1 project | news.ycombinator.com | 17 Apr 2024
  • Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split
    9 projects | news.ycombinator.com | 17 Apr 2024
    Ah, thanks for this! I can't edit my parent comment that you replied to any longer unfortunately.

    As I said, I only compared the contributors graphs [0] and checked for overlaps. But those apparently only go back about a year and list at most 100 contributors, ranked by number of commits.

    [0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors
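
The same overlap check can be done programmatically instead of eyeballing the graphs. A sketch using the GitHub REST API contributors endpoint (which has the same 100-entry cap; unauthenticated requests are rate-limited, so this is illustrative):

```python
import json
import urllib.request

def top_contributors(repo, per_page=100):
    """Fetch one page of top contributors (max 100) for 'owner/name'
    via the GitHub REST API."""
    url = f"https://api.github.com/repos/{repo}/contributors?per_page={per_page}"
    with urllib.request.urlopen(url) as resp:
        return {c["login"] for c in json.load(resp)}

def shared_logins(a, b):
    """Logins appearing in both projects' top-contributor sets."""
    return a & b

# Usage (performs live API calls):
#   overlap = shared_logins(top_contributors("ollama/ollama"),
#                           top_contributors("ggerganov/llama.cpp"))
```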

  • KodiBot - Local Chatbot App for Desktop
    2 projects | dev.to | 11 Apr 2024
    KodiBot is a desktop app that enables users to run their own AI chat assistants locally and offline on Windows, Mac, and Linux. KodiBot is a standalone app and does not require an internet connection or additional dependencies to run local chat assistants. It supports both llama.cpp-compatible models and the OpenAI API.
  • Mixture-of-Depths: Dynamically allocating compute in transformers
    3 projects | news.ycombinator.com | 8 Apr 2024
    There are already some implementations out there which attempt to accomplish this!

    Here's an example: https://github.com/silphendio/sliced_llama

    A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...

    Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275

    And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...

  • The lifecycle of a code AI completion
    6 projects | news.ycombinator.com | 7 Apr 2024
    For those who might not be aware of this, there is also an open source project on GitHub called "Twinny" which is an offline Visual Studio Code plugin equivalent to Copilot: https://github.com/rjmacarthy/twinny

    It can be used with a number of local model services. Currently for my setup on a NVIDIA 4090, I'm running both the base and instruct model for deepseek-coder 6.7b using 5_K_M Quantization GGUF files (for performance) through llama.cpp "server" where the base model is for completions and the instruct model for chat interactions.

    llama.cpp: https://github.com/ggerganov/llama.cpp/

    deepseek-coder 6.7b base GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GGU...

    deepseek-coder 6.7b instruct GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct...
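
The two-server setup above can be sketched as a small client: one llama.cpp `server` instance per model, with the base model answering raw completion requests and the instruct model getting a chat-formatted prompt. The ports are arbitrary, and the template is an approximation of deepseek-coder's instruct format — check the model card for the exact wording:

```python
import json
import urllib.request

# Hypothetical ports: one llama.cpp server process per model.
BASE_URL = "http://127.0.0.1:8080/completion"      # base model: completions
INSTRUCT_URL = "http://127.0.0.1:8081/completion"  # instruct model: chat

def complete(prompt, url, n_predict=64):
    """Call a llama.cpp server's /completion endpoint. Field names
    ('prompt', 'n_predict', 'content') may differ across versions."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"prompt": prompt, "n_predict": n_predict}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["content"]

def chat_prompt(user_message):
    """Approximation of deepseek-coder's instruct template; the base
    model is prompted with raw code instead."""
    return (
        "You are an AI programming assistant.\n"
        f"### Instruction:\n{user_message}\n### Response:\n"
    )
```

Completion requests go straight to `BASE_URL` with the surrounding code as the prompt; chat requests wrap the message with `chat_prompt` and go to `INSTRUCT_URL`.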

What are some alternatives?

When comparing KoboldAI and llama.cpp you can also consider the following projects:

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

KoboldAI-Client

gpt4all - gpt4all: run open-source LLMs anywhere

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

KoboldAI

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

stable-diffusion-webui - Stable Diffusion web UI

ggml - Tensor library for machine learning

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM