KoboldAI
transformers
|  | KoboldAI | transformers |
|---|---|---|
| Mentions | 41 | 175 |
| Stars | 327 | 125,021 |
| Growth | - | 3.1% |
| Activity | 9.5 | 10.0 |
| Latest commit | 14 days ago | 5 days ago |
| Language | C++ | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
KoboldAI
-
LLM spews nonsense in CVE report for curl
It's not that big a task as all that. There are a lot of unaligned models available, and user interfaces that aren't that hard to use.
https://github.com/henk717/KoboldAI
-
Chat with, and help host, a free community LLM "horde"
https://github.com/henk717/KoboldAI
- Hosts pick a quantized community LLM to run, which is (IMO) the real magic of this system. Cloud services tend to run generic Llama chat/instruct models, OpenAI API models, or maybe a single proprietary finetune, but the Llama/Mistral finetuning community is red hot. New finetunes and crazy merges/hybrids that outperform llama-chat in specific tasks (mostly Chat/Story/RP) come out every day, and each one has a different "flavor" and format:
https://huggingface.co/models?sort=modified&search=mistral+g...
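The "format" differences mentioned above come down to prompt templates: each finetune expects its input wrapped in a particular markup. A minimal sketch, using two common community template styles (the template strings and function names here are illustrative, not tied to any specific model card):

```python
# Illustrative prompt templates; every community finetune documents its own
# expected format, and these two (Alpaca-style and ChatML-style) are just
# well-known examples of how much the "flavor" can differ.
TEMPLATES = {
    "alpaca": "### Instruction:\n{prompt}\n\n### Response:\n",
    "chatml": "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
}

def format_prompt(prompt, style="alpaca"):
    """Wrap a raw user prompt in the template a given finetune expects."""
    return TEMPLATES[style].format(prompt=prompt)

print(format_prompt("Write a short story.", "chatml"))
```

Sending an un-templated prompt to a model trained on one of these formats usually still produces output, just noticeably worse output, which is why hosts picking and correctly configuring a specific finetune matters.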
- Run LLMs with KoboldAI on Intel ARC
-
No idea what I'm doing help
Sourceforge is our official version, but that one is too old to run newer models like Holomax; the releases for United can be found here: https://github.com/henk717/KoboldAI/releases
-
Still getting "read only" on JanitorAI even after setting model. Do I need to change anything config wise to get it to use pygmalion?
Colab Check: False, TPU: False
INIT | OK | KAI Horde Models
INFO | __main__::648 - We loaded the following model backends: KoboldAI API, KoboldAI Old Colab Method, Huggingface, GooseAI, Horde, OpenAI, Read Only
INFO | __main__:general_startup:1363 - Running on Repo: https://github.com/henk717/koboldai Branch:
INIT | Starting | Flask
INIT | OK | Flask
INIT | Starting | Webserver
INIT | OK | Webserver
MESSAGE | Webserver started! You may now connect with a browser at http://127.0.0.1:8501
INIT | Searching | GPU support
INIT | Found | GPU support
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
Traceback (most recent call last):
  File "B:\python\lib\site-packages\eventlet\hubs\selects.py", line 59, in wait
    listeners.get(fileno, hub.noop).cb(fileno)
  File "B:\python\lib\site-packages\eventlet\greenthread.py", line 221, in main
    result = function(*args, **kwargs)
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 837, in process_request
    proto.__init__(conn_state, self)
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 352, in __init__
    self.finish()
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 751, in finish
    BaseHTTPServer.BaseHTTPRequestHandler.finish(self)
  File "B:\python\lib\socketserver.py", line 811, in finish
    self.wfile.close()
  File "B:\python\lib\socket.py", line 687, in write
    return self._sock.send(b)
  File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 401, in send
    return self._send_loop(self.fd.send, data, flags)
  File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 388, in _send_loop
    return send_method(data, *args)
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
Removing descriptor: 1488
Connection Attempt: 127.0.0.1
INFO | __main__:do_connect:2574 - Client connected!
UI_1
TODO: Allow config
INFO | modeling.inference_models.hf:set_input_parameters:189 - {'0_Layers': 18, 'CPU_Layers': 10, 'Disk_Layers': 0, 'class': 'model', 'label': 'PygmalionAI_pygmalion-6b', 'id': 'PygmalionAI_pygmalion-6b', 'name': 'PygmalionAI_pygmalion-6b', 'size': '', 'menu': 'Custom', 'path': 'C:\\KoboldAI\\models\\PygmalionAI_pygmalion-6b', 'ismenu': 'false', 'plugin': 'Huggingface'}
INIT | Searching | GPU support
INIT | Found | GPU support
Loading checkpoint shards: 100%|██████████| 2/2 [00:19<00:00, 9.60s/it]
Loading model tensors: 100%|##########| 56/56 [00:05<00:00, 9.52it/s]
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
Connection Attempt: 127.0.0.1
INFO | __main__:do_connect:2574 - Client connected!
UI_1
-
Kobold API URL for Chub Venus Ai
That is our developer version; it's selectable in the Colab version dropdown and also available at https://github.com/henk717/koboldai
-
I got KoboldAI running on my computer and successfully connected it to Janitor, here's a small tutorial
Download Kobold from THIS LINK: https://github.com/henk717/KoboldAI. I downloaded Kobold from a different GitHub link and it wouldn't work, you need to get this specific one. Click on "Code", then download the zip.
-
I created a repo on GitHub to categorize AI models. You can browse AIs from many categories!
https://github.com/henk717/KoboldAI https://github.com/LostRuins/koboldcpp/ https://github.com/ggerganov/llama.cpp https://github.com/AUTOMATIC1111/stable-diffusion-webui https://github.com/oobabooga/text-generation-webui
-
Meta's new AI lets people make chatbots. They're using it for sex.
For the third, I don't think Oobabooga supports the horde but KoboldAI does. I won't go into how to install KoboldAI since Oobabooga should give you enough freedom with 7B, 13B and maybe 30B models (depending on available RAM), but KoboldAI lets you download some models directly from the web interface, supports using online service providers to run the models for you, and supports the horde with a list of available models to choose from.
-
Kobold AI broke after update (New to this)
"Your Pytorch installation did not update correctly, you can solve this by running install_requirements.bat in the mode where it deletes the existing runtime. Alternative you can download a fresh copy of the offline installer for KoboldAI United from : https://github.com/henk717/KoboldAI/releases"
transformers
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options.
The Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
were some of the best developed communities in the Jax AI field
Perhaps the "trax" repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
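The core matching step behind prompt lookup decoding is simple enough to sketch in a few lines. This is a toy, self-contained version of the idea (find where the current n-gram suffix last occurred in the context and propose the tokens that followed it as draft candidates), not the transformers implementation; the function name and parameters are hypothetical:

```python
def prompt_lookup_candidates(context, ngram_size=3, num_candidates=3):
    """Toy sketch of prompt-lookup decoding's matching step: find the most
    recent earlier occurrence of the current ngram suffix in the context and
    propose the tokens that followed it as draft continuations, to be
    verified in parallel by the real model."""
    suffix = context[-ngram_size:]
    # Scan backwards from the most recent possible match (excluding the
    # suffix itself at the end of the context).
    for start in range(len(context) - ngram_size - 1, -1, -1):
        if context[start:start + ngram_size] == suffix:
            follow = context[start + ngram_size:start + ngram_size + num_candidates]
            if follow:
                return follow
    return []  # no match: fall back to ordinary decoding

# Token IDs standing in for repeated phrasing in a long prompt.
ctx = [1, 2, 3, 4, 9, 9, 1, 2, 3]
print(prompt_lookup_candidates(ctx, ngram_size=3, num_candidates=3))  # → [4, 9, 9]
```

Swapping this context-derived lookup for a pretrained n-gram model would mostly mean replacing the backwards scan with a query against precomputed n-gram statistics, which is why the switch seems feasible.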
-
AI enthusiasm #6 - Finetune any LLM you want
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check their work and give them a star on GitHub, if you please.
-
Schedule-Free Learning β A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to combination for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best whilst 1e-5 actually worked better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
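The 1-cycle policy referred to above is just a warmup followed by an anneal back down. A minimal sketch of one common variant (linear warmup, cosine decay); the function name and all parameter values here are illustrative, not a tuning recipe:

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-3, pct_warmup=0.3, min_lr=1e-5):
    """Minimal one-cycle schedule: linear warmup from min_lr to max_lr over
    the first pct_warmup of training, then cosine anneal back to min_lr."""
    warmup_steps = int(total_steps * pct_warmup)
    if step < warmup_steps:
        # Linear warmup phase.
        return min_lr + (max_lr - min_lr) * step / warmup_steps
    # Cosine decay over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + (max_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))

lrs = [one_cycle_lr(s, 100) for s in range(101)]
# The LR peaks at the end of warmup (step 30 here) and decays back to min_lr.
```

The appeal is that the peak LR only has to be roughly right: the schedule spends most of its time below it, which is part of why the 1-cycle shape survived even where the range finder's point estimate did not.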
-
Gemma doesn't suck anymore β 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
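One architectural detail worth tinkering with in that implementation is Mistral's sliding-window attention, where each position only attends to a fixed-size window of recent positions rather than the full causal prefix. A toy mask construction in plain Python (the real window in Mistral 7B is 4096; the small sizes and function name here are illustrative):

```python
def sliding_window_causal_mask(seq_len, window=3):
    """Boolean attention mask where position i may attend to positions j
    with i - window < j <= i: causal masking plus a sliding window.
    Toy sketch of Mistral-style sliding-window attention."""
    return [[(i - window < j <= i) for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_causal_mask(6, window=3)
# Row 5 attends only to positions 3, 4, 5:
print([j for j, ok in enumerate(mask[5]) if ok])  # → [3, 4, 5]
```

Compared to full causal attention, this caps per-token attention cost at the window size instead of the sequence length, which is the main reason the architecture handles long contexts cheaply.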
-
Fail to reproduce the same evaluation metrics score during inference.
I am aware that using mixed precision reduces the stability of the weights and that some inconsistency is expected, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
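The scale of drift half precision can introduce is easy to demonstrate without any ML framework. This toy sketch (not related to the poster's actual setup) emulates fp16 arithmetic by round-tripping through the standard library `struct` module's half-precision `'e'` format:

```python
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE half-precision value by
    packing/unpacking with struct's 'e' format (a stand-in for real fp16)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# 0.1 itself is not exactly representable in fp16: the spacing between
# adjacent fp16 values near 0.1 is about 6e-5.
print(to_fp16(0.1))

# Accumulate 1e-4 a thousand times, rounding to fp16 after every add.
acc16 = to_fp16(0.0)
for _ in range(1000):
    acc16 = to_fp16(acc16 + 1e-4)

acc32 = sum(1e-4 for _ in range(1000))  # ordinary double-precision sum
print(acc16, acc32)  # the fp16 accumulator drifts visibly away from 0.1
```

Each fp16 addition rounds to a grid much coarser than the increment, so the errors compound, which is the same mechanism (at metric-accumulation or weight-update scale) that can make mixed-precision evaluation numbers disagree run to run.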
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
KoboldAI-Client
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
llama - Inference code for Llama models
KoboldAI
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
stable-diffusion-webui - Stable Diffusion web UI
llama.cpp - LLM inference in C/C++
huggingface_hub - The official Python client for the Huggingface Hub.