KoboldAI
langchain
|  | KoboldAI | langchain |
| --- | --- | --- |
| Mentions | 41 | 152 |
| Stars | 327 | 56,526 |
| Growth | - | - |
| Activity | 9.5 | 10.0 |
| Last commit | 14 days ago | 9 months ago |
| Language | C++ | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
KoboldAI
-
LLM spews nonsense in CVE report for curl
It’s not as big a task as all that. There are a lot of unaligned models available, and user interfaces that aren’t that hard to use.
https://github.com/henk717/KoboldAI
-
Chat with, and help host, a free community LLM "horde"
https://github.com/henk717/KoboldAI
- Hosts pick a quantized community LLM to run, which is (IMO) the real magic of this system. Cloud services tend to run generic Llama chat/instruct models, OpenAI API models, or maybe a single proprietary finetune, but the Llama/Mistral finetuning community is red hot. New finetunes and crazy merges/hybrids that outperform llama-chat in specific tasks (mostly Chat/Story/RP) come out every day, and each one has a different "flavor" and format:
https://huggingface.co/models?sort=modified&search=mistral+g...
- Run LLMs with KoboldAI on Intel ARC
-
No idea what I'm doing help
Sourceforge is our official version, but that one is too old to run newer models like Holomax. The releases for United can be found here: https://github.com/henk717/KoboldAI/releases
-
Still getting "read only" on JanitorAI even after setting the model. Do I need to change anything config-wise to get it to use Pygmalion?
Colab Check: False, TPU: False
INIT | OK | KAI Horde Models
INFO | __main__::648 - We loaded the following model backends: KoboldAI API KoboldAI Old Colab Method Huggingface GooseAI Horde OpenAI Read Only
INFO | __main__:general_startup:1363 - Running on Repo: https://github.com/henk717/koboldai Branch:
INIT | Starting | Flask
INIT | OK | Flask
INIT | Starting | Webserver
INIT | OK | Webserver
MESSAGE | Webserver started! You may now connect with a browser at http://127.0.0.1:8501
INIT | Searching | GPU support
INIT | Found | GPU support
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
Traceback (most recent call last):
  File "B:\python\lib\site-packages\eventlet\hubs\selects.py", line 59, in wait
    listeners.get(fileno, hub.noop).cb(fileno)
  File "B:\python\lib\site-packages\eventlet\greenthread.py", line 221, in main
    result = function(*args, **kwargs)
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 837, in process_request
    proto.__init__(conn_state, self)
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 352, in __init__
    self.finish()
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 751, in finish
    BaseHTTPServer.BaseHTTPRequestHandler.finish(self)
  File "B:\python\lib\socketserver.py", line 811, in finish
    self.wfile.close()
  File "B:\python\lib\socket.py", line 687, in write
    return self._sock.send(b)
  File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 401, in send
    return self._send_loop(self.fd.send, data, flags)
  File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 388, in _send_loop
    return send_method(data, *args)
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
Removing descriptor: 1488
Connection Attempt: 127.0.0.1
INFO | __main__:do_connect:2574 - Client connected!
UI_1
TODO: Allow config
INFO | modeling.inference_models.hf:set_input_parameters:189 - {'0_Layers': 18, 'CPU_Layers': 10, 'Disk_Layers': 0, 'class': 'model', 'label': 'PygmalionAI_pygmalion-6b', 'id': 'PygmalionAI_pygmalion-6b', 'name': 'PygmalionAI_pygmalion-6b', 'size': '', 'menu': 'Custom', 'path': 'C:\\KoboldAI\\models\\PygmalionAI_pygmalion-6b', 'ismenu': 'false', 'plugin': 'Huggingface'}
INIT | Searching | GPU support
INIT | Found | GPU support
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:19<00:00, 9.60s/it]
Loading model tensors: 100%|##########| 56/56 [00:05<00:00, 9.52it/s]
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
Connection Attempt: 127.0.0.1
INFO | __main__:do_connect:2574 - Client connected!
UI_1
-
Kobold API URL for Chub Venus Ai
That is our developer version; it's selectable in the Colab version dropdown and also available at https://github.com/henk717/koboldai
-
I got KoboldAI running on my computer and successfully connected it to Janitor, here's a small tutorial
Download Kobold from THIS LINK: https://github.com/henk717/KoboldAI. I downloaded Kobold from a different GitHub link and it wouldn't work; you need to get this specific one. Click on "code", then download the zip.
-
I created a repo on Github to categorize AI models. You can browse AIs from many categories!
https://github.com/henk717/KoboldAI https://github.com/LostRuins/koboldcpp/ https://github.com/ggerganov/llama.cpp https://github.com/AUTOMATIC1111/stable-diffusion-webui https://github.com/oobabooga/text-generation-webui
-
Meta’s new AI lets people make chatbots. They’re using it for sex.
For the third, I don't think Oobabooga supports the horde but KoboldAI does. I won't go into how to install KoboldAI since Oobabooga should give you enough freedom with 7B, 13B and maybe 30B models (depending on available RAM), but KoboldAI lets you download some models directly from the web interface, supports using online service providers to run the models for you, and supports the horde with a list of available models to choose from.
-
Kobold AI broke after update (New to this)
"Your PyTorch installation did not update correctly. You can solve this by running install_requirements.bat in the mode where it deletes the existing runtime. Alternatively, you can download a fresh copy of the offline installer for KoboldAI United from: https://github.com/henk717/KoboldAI/releases"
langchain
-
🗣️🤖 Ask your Neo4j knowledge base questions in natural language & get KPIs
LangChain and its Custom Tools implementation are also a great (and very efficient) way to set up a dedicated Q&A agent (for example, for chat purposes).
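The custom-tool pattern the comment describes can be sketched without the library itself: the agent exposes named tools, the LLM picks one by name, and the chosen tool answers from the knowledge base. Everything here is a hypothetical stand-in (the `count_customers` stub, its canned answer, the hardcoded tool choice); in LangChain you would register a real function as a `Tool` and let the agent select it.

```python
def count_customers(_query: str) -> str:
    # Stub standing in for a real Cypher query against Neo4j,
    # e.g. MATCH (c:Customer) RETURN count(c)
    return "42 customers"

# Registry of named tools the agent can dispatch to.
TOOLS = {"count_customers": count_customers}

def run_agent(tool_choice: str, query: str) -> str:
    """Dispatch the tool the LLM chose. In a real agent, tool_choice
    would be parsed from the model's response; here it is hardcoded."""
    tool = TOOLS.get(tool_choice)
    return tool(query) if tool else "unknown tool"

print(run_agent("count_customers", "How many customers do we have?"))
```

The value of the pattern is that the LLM only has to produce a tool name and arguments; the actual KPI lookup stays deterministic code.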
- LangChain – Some quick, high level thoughts on improvements/changes
-
Claude 2 Internal API Client and CLI
We're using it via langchain talking to Amazon Bedrock, which is hosting Claude 1.x. It's comparable to GPT-3.x, not bad. The integration doesn't seem to be fully there, though; I think langchain expects "Human:" and "AI:", but Claude uses "Assistant:".
https://github.com/hwchase17/langchain/issues/2638
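The mismatch above can be sketched with a small adapter: Claude's completion-style API expects turns prefixed with "\n\nHuman:" and "\n\nAssistant:", so "AI:"-style turns need remapping before being sent. This helper is hypothetical, not part of langchain or any Claude client.

```python
# Map LangChain-style role labels onto Claude's expected labels.
ROLE_MAP = {"Human": "Human", "AI": "Assistant", "Assistant": "Assistant"}

def to_claude_prompt(turns):
    """Render (role, text) pairs in Claude's prompt format, ending with
    an open 'Assistant:' turn for the model to complete."""
    parts = [f"\n\n{ROLE_MAP[role]}: {text}" for role, text in turns]
    parts.append("\n\nAssistant:")
    return "".join(parts)

print(to_claude_prompt([("Human", "Hello"), ("AI", "Hi there"), ("Human", "How are you?")]))
```

Without this kind of normalization, Claude tends to ignore or mangle turns labeled "AI:", which matches the "integration doesn't seem to be fully there" observation.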
-
Any better alternatives to fine-tuning GPT-3 yet to create a custom chatbot persona based on provided knowledge for others to use?
Depending on how much work you want to put into it, you can get started at HuggingFace with their models and datasets, but you'd need compute power, an MLOps setup, etc. I was introduced to the concept in this video, since Google has their Vertex AI tools on Google Cloud, and there's always LangChain, but I'm not sure about anything recent.
-
langchain VS griptape - a user suggested alternative
2 projects | 11 Jul 2023
2 projects | 9 Jul 2023
-
Vector storage is coming to Meilisearch to empower search through AI
a documentation chatbot proof of concept using GPT3.5 and LangChain
-
ChatPDF: What ChatGPT Can't Do, This Can!
I encourage everyone to pay attention to the Langchain open-source project and leverage it to achieve tasks that ChatGPT cannot handle.
- LangChain Arbitrary Command Execution - CVE-2023-34541
-
Langchain Is Pointless
Yeah, I never know where memory goes exactly in langchain; it's not exactly clear all the time. But sure, the main insight I remember is this: take a look at their MULTI_PROMPT_ROUTER_TEMPLATE: https://github.com/hwchase17/langchain/blob/560c4dfc98287da1...
It's a lot of instructions for an LLM. They seem to forget an LLM is an auto-completion machine, and what data it was trained on. Using <<>> for sections is not a normal thing; it's not markdown, which is probably read far more often on the internet. Instead of open JSON comments, why not type signatures? Instead of so many rules, why not give it examples? It is an autocomplete machine!
They are relying too much on the LLM being smart, probably because they only test with GPT-4 and 3.5. With GPT4All models this prompt was not working at all, so I had to rewrite it. For simple routing we don't even need JSON; carrying the `next_inputs` here is weird if you don't need it.
So this is my version of it: https://gist.github.com/rogeriochaves/b67676977eebb1936b9b5c...
It's so basic it's dumb, yet it is more powerful, as it does not rely on GPT-4-level intelligence; it's just what I needed.
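The "no JSON needed" routing idea can be sketched in a few lines: ask the model for a bare destination name, then normalize whatever it replies against the known routes, falling back to a default. The route names and prompt wording below are illustrative, not code from the gist or from LangChain.

```python
ROUTES = ["physics", "math", "DEFAULT"]

def build_router_prompt(question, routes=ROUTES):
    """Ask the model for exactly one destination name, no JSON."""
    names = ", ".join(routes)
    return (
        f"Given the question below, answer with exactly one of: {names}.\n"
        f"Question: {question}\n"
        "Destination:"
    )

def parse_destination(reply, routes=ROUTES):
    """Match the model's (possibly chatty) reply to a known route,
    falling back to DEFAULT when nothing matches."""
    text = reply.strip().lower()
    for route in routes:
        if route.lower() in text:
            return route
    return "DEFAULT"

print(parse_destination("  Physics, I think."))
```

Because the parser only needs a substring match, even a weak local model that pads its answer with chatter still routes correctly, which is the commenter's point about not depending on GPT-4-level instruction following.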
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
KoboldAI-Client
llama_index - LlamaIndex is a data framework for your LLM applications
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
llama - Inference code for Llama models
KoboldAI
stable-diffusion-webui - Stable Diffusion web UI
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
llama.cpp - LLM inference in C/C++
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.