koboldcpp vs KoboldAI

| | koboldcpp | KoboldAI |
|---|---|---|
| Mentions | 180 | 41 |
| Stars | 3,817 | 327 |
| Growth | - | - |
| Activity | 10.0 | 9.5 |
| Latest commit | 2 days ago | 16 days ago |
| Language | C++ | C++ |
| License | GNU Affero General Public License v3.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
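For intuition only, here is a minimal sketch (not the site's actual formula) of how a recency-weighted activity score could be computed: each commit contributes a weight that decays exponentially with its age, so recent commits count for more than old ones. The half-life value below is an arbitrary assumption.

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted activity score: recent commits count more.

    commit_dates: iterable of timezone-aware datetimes, one per commit.
    half_life_days: assumed decay half-life (arbitrary; not the site's value).
    """
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)  # weight halves every half-life
    return score
```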
Recent mentions of koboldcpp
- Any Online Communities on Local/Home AI?
- Koboldcpp-1.62.1 adds support for Command-R+
- Show HN: I made an app to use local AI as daily driver
- Easiest way to show my model to my mom?
FYI, this is the easiest way to host on the horde: https://github.com/LostRuins/koboldcpp
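Once a koboldcpp instance is running locally, clients (or a horde worker) talk to it over the KoboldAI-compatible HTTP API. A minimal sketch, assuming the default local endpoint of http://localhost:5001 and the standard /api/v1/generate route; exact parameter names can vary between versions:

```python
import json
import urllib.request

# Assumes a local koboldcpp/KoboldAI instance on its default port (5001).
API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Explain what the AI Horde is in one sentence.",
    "max_length": 80,      # number of tokens to generate
    "temperature": 0.7,
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The Kobold API returns generated text under results[0]["text"].
print(result["results"][0]["text"])
```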
- IT Veteran... why am I struggling with all of this?
- What do you use to run your models?
- ByteDance AI researcher suggests that open source model more powerful than Gemini to be released soon
- i need some help guys
- [Guide] How to install KoboldAI on Android via Termux (Update 04-12-2023)
For more information on Koboldcpp, see this guide: https://github.com/LostRuins/koboldcpp/wiki
- SillyTavern 1.10.10 has been released
Out of curiosity, is there a specific reason for this? The most popular fork, KoboldCpp, is in active development, was the first to adopt the Min P sampler, and even distinguishes itself with its context shift feature. Just wondering what this means for the future. Thanks!
Recent mentions of KoboldAI
- LLM spews nonsense in CVE report for curl
It's not as big a task as all that. There are a lot of unaligned models available, and user interfaces that aren't that hard to use.
https://github.com/henk717/KoboldAI
- Chat with, and help host, a free community LLM "horde"
https://github.com/henk717/KoboldAI
Hosts pick a quantized community LLM to run, which is (IMO) the real magic of this system. Cloud services tend to run generic Llama chat/instruct models, OpenAI API models, or maybe a single proprietary finetune, but the Llama/Mistral finetuning community is red hot. New finetunes and crazy merges/hybrids that outperform llama-chat in specific tasks (mostly Chat/Story/RP) come out every day, and each one has a different "flavor" and format:
https://huggingface.co/models?sort=modified&search=mistral+g...
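For example, the newest community finetunes can also be browsed programmatically with the huggingface_hub client, mirroring the search link above; the search term and result limit here are illustrative assumptions, not anything prescribed by the projects being compared.

```python
from huggingface_hub import list_models

# List recently updated models matching a search term, newest first
# (term and limit are arbitrary examples).
for model in list_models(search="mistral gguf", sort="lastModified", direction=-1, limit=10):
    print(model.id)
```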
- Run LLMs with KoboldAI on Intel ARC
- No idea what I'm doing, help
Sourceforge is our official version, but that one is too old to run newer models like Holomax; the releases for United can be found here: https://github.com/henk717/KoboldAI/releases
- Still getting "read only" on JanitorAI even after setting the model. Do I need to change anything config-wise to get it to use Pygmalion?
    Colab Check: False, TPU: False
    INIT | OK | KAI Horde Models
    INFO | __main__::648 - We loaded the following model backends: KoboldAI API KoboldAI Old Colab Method Huggingface GooseAI Horde OpenAI Read Only
    INFO | __main__:general_startup:1363 - Running on Repo: https://github.com/henk717/koboldai Branch:
    INIT | Starting | Flask
    INIT | OK | Flask
    INIT | Starting | Webserver
    INIT | OK | Webserver
    MESSAGE | Webserver started! You may now connect with a browser at http://127.0.0.1:8501
    INIT | Searching | GPU support
    INIT | Found | GPU support
    INIT | Starting | LUA bridge
    INIT | OK | LUA bridge
    INIT | Starting | LUA Scripts
    INIT | OK | LUA Scripts
    Setting Seed
    Traceback (most recent call last):
      File "B:\python\lib\site-packages\eventlet\hubs\selects.py", line 59, in wait
        listeners.get(fileno, hub.noop).cb(fileno)
      File "B:\python\lib\site-packages\eventlet\greenthread.py", line 221, in main
        result = function(*args, **kwargs)
      File "B:\python\lib\site-packages\eventlet\wsgi.py", line 837, in process_request
        proto.__init__(conn_state, self)
      File "B:\python\lib\site-packages\eventlet\wsgi.py", line 352, in __init__
        self.finish()
      File "B:\python\lib\site-packages\eventlet\wsgi.py", line 751, in finish
        BaseHTTPServer.BaseHTTPRequestHandler.finish(self)
      File "B:\python\lib\socketserver.py", line 811, in finish
        self.wfile.close()
      File "B:\python\lib\socket.py", line 687, in write
        return self._sock.send(b)
      File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 401, in send
        return self._send_loop(self.fd.send, data, flags)
      File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 388, in _send_loop
        return send_method(data, *args)
    ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
    Removing descriptor: 1488
    Connection Attempt: 127.0.0.1
    INFO | __main__:do_connect:2574 - Client connected! UI_1
    TODO: Allow config
    INFO | modeling.inference_models.hf:set_input_parameters:189 - {'0_Layers': 18, 'CPU_Layers': 10, 'Disk_Layers': 0, 'class': 'model', 'label': 'PygmalionAI_pygmalion-6b', 'id': 'PygmalionAI_pygmalion-6b', 'name': 'PygmalionAI_pygmalion-6b', 'size': '', 'menu': 'Custom', 'path': 'C:\\KoboldAI\\models\\PygmalionAI_pygmalion-6b', 'ismenu': 'false', 'plugin': 'Huggingface'}
    INIT | Searching | GPU support
    INIT | Found | GPU support
    Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:19<00:00, 9.60s/it]
    Loading model tensors: 100%|##########| 56/56 [00:05<00:00, 9.52it/s]
    INIT | Starting | LUA bridge
    INIT | OK | LUA bridge
    INIT | Starting | LUA Scripts
    INIT | OK | LUA Scripts
    Setting Seed
    Connection Attempt: 127.0.0.1
    INFO | __main__:do_connect:2574 - Client connected! UI_1
- Kobold API URL for Chub Venus AI
That is our developer version; it's selectable in the Colab version dropdown and also available at https://github.com/henk717/koboldai
- I got KoboldAI running on my computer and successfully connected it to Janitor, here's a small tutorial
Download Kobold from THIS LINK: https://github.com/henk717/KoboldAI. I downloaded Kobold from a different GitHub link and it wouldn't work; you need to get this specific one. Click on "Code", then download the ZIP.
- I created a repo on GitHub to categorize AI models. You can browse AIs from many categories!
https://github.com/henk717/KoboldAI
https://github.com/LostRuins/koboldcpp/
https://github.com/ggerganov/llama.cpp
https://github.com/AUTOMATIC1111/stable-diffusion-webui
https://github.com/oobabooga/text-generation-webui
- Meta's new AI lets people make chatbots. They're using it for sex.
For the third, I don't think Oobabooga supports the horde, but KoboldAI does. I won't go into how to install KoboldAI, since Oobabooga should give you enough freedom with 7B, 13B, and maybe 30B models (depending on available RAM), but KoboldAI lets you download some models directly from the web interface, supports using online service providers to run the models for you, and supports the horde with a list of available models to choose from.
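As a rough sketch of what "supports the horde" means on the client side, text can be requested from the community AI Horde over its public REST API. The endpoint paths, response fields, and the anonymous API key below reflect the Horde's v2 API as I understand it; treat them as assumptions to verify against the current Horde documentation.

```python
import json
import time
import urllib.request

# Assumed AI Horde v2 endpoints (verify against the current Horde docs).
BASE = "https://aihorde.net/api/v2"
HEADERS = {"Content-Type": "application/json", "apikey": "0000000000"}  # anonymous key

def horde_generate(prompt: str) -> str:
    # Submit an asynchronous text-generation request to the horde.
    body = json.dumps({"prompt": prompt, "params": {"max_length": 80}}).encode("utf-8")
    req = urllib.request.Request(f"{BASE}/generate/text/async", data=body, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        job_id = json.load(resp)["id"]

    # Poll until a horde worker picks up the job and finishes it.
    while True:
        with urllib.request.urlopen(f"{BASE}/generate/text/status/{job_id}") as resp:
            status = json.load(resp)
        if status.get("done"):
            return status["generations"][0]["text"]
        time.sleep(2)

print(horde_generate("Once upon a time"))
```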
- Kobold AI broke after update (New to this)
"Your PyTorch installation did not update correctly. You can solve this by running install_requirements.bat in the mode where it deletes the existing runtime. Alternatively, you can download a fresh copy of the offline installer for KoboldAI United from: https://github.com/henk717/KoboldAI/releases"
What are some alternatives?
KoboldAI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
KoboldAI-Client
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
stable-diffusion-webui - Stable Diffusion web UI
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
llama.cpp - LLM inference in C/C++
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.