KoboldAI-Client vs llama.cpp

| | KoboldAI-Client | llama.cpp |
|---|---|---|
| Mentions | 185 | 772 |
| Stars | 3,344 | 56,891 |
| Growth | - | - |
| Activity | 6.3 | 10.0 |
| Last commit | about 2 months ago | 4 days ago |
| Language | Python | C++ |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
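The description above says only that recent commits carry more weight than older ones; the tracker's actual formula is not published here. As an illustration of that idea, here is a toy score where each commit's weight decays with age (the 30-day half-life is an invented assumption, not the site's real parameter):

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy activity metric: each commit contributes a weight that
    halves every `half_life_days`, so recent commits count more.
    (Illustrative only; not the tracker's real formula.)"""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with five recent commits outscores one with five old commits.
recent = activity_score([1, 2, 3, 5, 8])          # commits in the last week
stale = activity_score([300, 310, 320, 330, 340])  # commits from ~a year ago
```

Under any such decay, two projects with identical commit counts can land at very different activity scores, which is why the table above reports a relative number rather than a raw count.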
KoboldAI-Client
- No idea what I'm doing help
- ChatGPT users drop for the first time as people turn to uncensored chatbots

  You can use KoboldAI to run an LLM locally. There are hundreds or thousands of models on Hugging Face. Some uncensored ones are Pygmalion AI (chatbot), Erebus (story-writing AI), or Vicuna (general purpose).
- Tips for using Kobold with Venus? I am pretty new at everything.

  GPT-J 6B is a pretty weak and outdated model. Nerys 13B would probably give you better replies, but it leans more towards SFW content. Erebus was their best model for erotic roleplay, but they removed it as it went against Google's TOS. You can check out their documentation here.
- I can't do this y'all

  If you do have that kind of hardware, the next step would be looking for which model to run. I came across Kobold's models. Their main GitHub page is here: https://github.com/KoboldAI/KoboldAI-Client
- Question regarding model compatibility for Alpaca Turbo

  Then there are graphical user interfaces like text-generation-webui and gpt4all for general-purpose chat. There are also KoboldAI and SillyTavern, which focus more on storytelling and roleplay and have tools to improve that.
- Running Multiple AI Models Sequentially for a Conversation on a Single GPU

  And finally, the folks from KoboldAI do some interesting stuff with pseudocode and soft prompts that might also be relevant.
- Summoning Life-Size Characters to Your Room: New Update for my Mixed Reality App!
- Feels like the censorship has gotten tighter recently, just me?
- How to get a KoboldAI URL API key!

  Click this link: https://github.com/KoboldAI/KoboldAI-Client/tree/main
- Difficulties installing Pygmalion 13b

  Do you believe the problem could be that my KoboldAI is outdated? I did download the one from henk717 at https://github.com/KoboldAI/KoboldAI-Client, but it was a little while ago.
llama.cpp
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps

  Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released

  Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer

  Have just done this recently for a local chat-with-PDF feature in https://recurse.chat. (It's a macOS app that has a built-in llama.cpp server and local vector database.)

  Running an embedding server locally is pretty straightforward:

  - Get the llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
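Once a llama.cpp server is running in embedding mode (the exact flag and endpoint names vary by version, so check the release's server README), comparing the returned vectors is just cosine similarity. A minimal sketch, using made-up vectors in place of a real server response:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional vectors standing in for real embeddings;
# actual llama.cpp embeddings have hundreds or thousands of dimensions.
doc_vec = [0.1, 0.3, 0.5]
query_vec = [0.2, 0.2, 0.6]
score = cosine_similarity(doc_vec, query_vec)
```

For a chat-with-PDF feature like the one described, you would embed each document chunk once, store the vectors, and at query time rank chunks by this score against the query's embedding.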
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
- Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split

  Ah, thanks for this! Unfortunately, I can no longer edit the parent comment that you replied to.

  As I said, I only compared the contributors graphs [0] and checked for overlaps. But those apparently only go back about a year and only list at most 100 contributors, ranked by number of commits.

  [0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors
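The overlap check described in that comment boils down to intersecting two contributor lists. A minimal sketch with made-up usernames; a real script would fetch the login lists from GitHub's `GET /repos/{owner}/{repo}/contributors` REST endpoint, which is paginated and, like the graphs page, ranks contributors by commit count:

```python
def contributor_overlap(repo_a_logins, repo_b_logins):
    """Return the set of contributor logins appearing in both repos."""
    return set(repo_a_logins) & set(repo_b_logins)

# Made-up usernames standing in for real API results.
ollama_contribs = ["alice", "bob", "carol"]
llamacpp_contribs = ["carol", "dave", "erin"]
shared = contributor_overlap(ollama_contribs, llamacpp_contribs)  # {"carol"}
```

Note the caveat from the comment still applies to any such script: results only reflect whatever window and contributor cap the data source imposes.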
What are some alternatives?
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
gpt4all - gpt4all: run open-source LLMs anywhere
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
KoboldAI
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
Clover-Edition - State of the art AI plays dungeon master to your adventures.
ggml - Tensor library for machine learning
stable-diffusion-webui - Stable Diffusion web UI
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM