anaconda-issues vs KoboldAI

| | anaconda-issues | KoboldAI |
|---|---|---|
| Mentions | 10 | 41 |
| Stars | 641 | 331 |
| Growth | 0.0% | - |
| Activity | 6.6 | 9.5 |
| Last commit | 3 months ago | 7 days ago |
| Language | C++ | |
| License | - | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
anaconda-issues
- SSL certificates are broken
Google search of the issue also turned up this thread: https://github.com/ContinuumIO/anaconda-issues/issues/72
- Model 8bit Optimization Through WSL
- Why is Anaconda running so slow in my terminal?
I found this post on GitHub that describes the same issue I'm facing, but the solutions didn't help.
- Python 3 Types in the Wild
A scientist typically wouldn't write a web backend, a sysadmin doesn't do a lot of statistical work, etc.
A small startup might do well to make their MVP in Python, but as the code grows the implicit costs (of using Python) do too.
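A minimal sketch of the kind of implicit cost being described: without annotations, a type mistake in a growing codebase only surfaces at runtime, while hints let a checker such as mypy catch it beforehand. The `total_cost` helper is hypothetical, purely for illustration:

```python
# Hypothetical helper. With these annotations, mypy flags a caller
# that passes a str where list[float] is expected; without them,
# the mistake only surfaces at runtime as a TypeError.
def total_cost(prices: list[float], tax_rate: float) -> float:
    """Sum the prices and apply a flat tax rate."""
    return sum(prices) * (1.0 + tax_rate)

print(total_cost([10.0, 5.0], 0.1))
```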
* * *
In re: Rust, sorry I wasn't clear above. I don't mean that Rust is a glue language, I mean that people write e.g. grep replacements in it and things like that. Python does systems programming by being glue, Rust does it by being, well, Rust. It makes sense to me that Rust libs would get Python wrappers, but it also seems to me that that adds to my argument: Python is good for small glue, but crunchy things (like grep) should be written in e.g. Rust or Go or something.
* * *
One other thing about Python is that the packaging & distribution "story" is ridiculous now. The people in charge of that call themselves the Python Packaging Authority (which name, given what they're doing, reminds me of the movie Brazil), and they seem to me to be running amok, cargo-culting the crap out of what should be a pretty simple and straightforward problem. I could go on, but I feel a rant brewing, so I'll cut it off there.
It's not just the PyPA folks that have problems packaging and distributing Python. The Conda folks have shipped Tkinter in a broken state for five years now: https://github.com/ContinuumIO/anaconda-issues/issues/6833. That's the default GUI toolkit that ships with the Python standard library.
Compare and contrast with Rust's Cargo, or Nim's Nimble, or Erlang's Rebar, etc.
- tkinter font is pixelated
If so, you can check this solution. I've encountered this same issue when using Anaconda or Conda. https://github.com/ContinuumIO/anaconda-issues/issues/6833
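One frequently suggested workaround for pixelated or blurry Tk text on high-DPI Windows displays is to opt the process into DPI awareness before creating the Tk root window. This is a general Windows-side mitigation, an assumption on my part rather than the fix discussed in the linked issue (which concerns the Anaconda Tk build itself):

```python
import sys

def enable_windows_dpi_awareness() -> bool:
    """Best-effort opt-in to system DPI awareness on Windows.

    Call this before tkinter.Tk() so Tk renders at native resolution
    instead of being bitmap-scaled (which looks pixelated). Returns
    True on success, False on non-Windows platforms or on failure.
    """
    if sys.platform != "win32":
        return False
    try:
        from ctypes import windll
        # 1 = PROCESS_SYSTEM_DPI_AWARE (shcore.dll, Windows 8.1+)
        windll.shcore.SetProcessDpiAwareness(1)
        return True
    except Exception:
        return False
```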
- Astrophysicist wants to learn Python
- I know everyone hates Waves, but I seriously hate IK Multimedia even more. Anyone else?
- cannot connect to CentOS 8 server from Windows 10 PC using xrdp, macOS ok
https://github.com/ContinuumIO/anaconda-issues/issues/1206#issuecomment-258672013
- Windows 10 5.0.1 install gets stuck at "Anaconda3\pkgs\.install.py" · Issue #7587 · ContinuumIO/anaconda-issues
KoboldAI
- LLM spews nonsense in CVE report for curl
It's not as big a task as all that. There are a lot of unaligned models available, and user interfaces that aren't that hard to use.
https://github.com/henk717/KoboldAI
- Chat with, and help host, a free community LLM "horde"
https://github.com/henk717/KoboldAI
- Hosts pick a quantized community LLM to run, which is (IMO) the real magic of this system. Cloud services tend to run generic Llama chat/instruct models, OpenAI API models, or maybe a single proprietary finetune, but the Llama/Mistral finetuning community is red hot. New finetunes and crazy merges/hybrids that outperform llama-chat on specific tasks (mostly chat/story/RP) come out every day, and each one has a different "flavor" and format:
https://huggingface.co/models?sort=modified&search=mistral+g...
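To illustrate what a different "format" means in practice, here is a sketch of two common prompt templates that community finetunes are trained on. The template strings are typical examples of the Alpaca and ChatML styles, not tied to any specific model:

```python
def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style instruct format used by many community finetunes."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def chatml_prompt(user_message: str) -> str:
    """ChatML-style chat format used by other finetunes."""
    return (
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Sending a ChatML-formatted prompt to an Alpaca-tuned model (or vice
# versa) typically degrades output quality, which is why front ends
# like KoboldAI let you pick the template per model.
```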
- Run LLMs with KoboldAI on Intel ARC
- No idea what I'm doing help
SourceForge is our official version, but that one is too old to run newer models like Holomax; the releases for United can be found here: https://github.com/henk717/KoboldAI/releases
- Still getting "read only" on JanitorAI even after setting the model. Do I need to change anything config-wise to get it to use Pygmalion?
Colab Check: False, TPU: False
INIT | OK | KAI Horde Models
INFO | __main__::648 - We loaded the following model backends: KoboldAI API KoboldAI Old Colab Method Huggingface GooseAI Horde OpenAI Read Only
INFO | __main__:general_startup:1363 - Running on Repo: https://github.com/henk717/koboldai Branch:
INIT | Starting | Flask
INIT | OK | Flask
INIT | Starting | Webserver
INIT | OK | Webserver
MESSAGE | Webserver started! You may now connect with a browser at http://127.0.0.1:8501
INIT | Searching | GPU support
INIT | Found | GPU support
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
Traceback (most recent call last):
  File "B:\python\lib\site-packages\eventlet\hubs\selects.py", line 59, in wait
    listeners.get(fileno, hub.noop).cb(fileno)
  File "B:\python\lib\site-packages\eventlet\greenthread.py", line 221, in main
    result = function(*args, **kwargs)
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 837, in process_request
    proto.__init__(conn_state, self)
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 352, in __init__
    self.finish()
  File "B:\python\lib\site-packages\eventlet\wsgi.py", line 751, in finish
    BaseHTTPServer.BaseHTTPRequestHandler.finish(self)
  File "B:\python\lib\socketserver.py", line 811, in finish
    self.wfile.close()
  File "B:\python\lib\socket.py", line 687, in write
    return self._sock.send(b)
  File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 401, in send
    return self._send_loop(self.fd.send, data, flags)
  File "B:\python\lib\site-packages\eventlet\greenio\base.py", line 388, in _send_loop
    return send_method(data, *args)
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
Removing descriptor: 1488
Connection Attempt: 127.0.0.1
INFO | __main__:do_connect:2574 - Client connected!
UI_1
TODO: Allow config
INFO | modeling.inference_models.hf:set_input_parameters:189 - {'0_Layers': 18, 'CPU_Layers': 10, 'Disk_Layers': 0, 'class': 'model', 'label': 'PygmalionAI_pygmalion-6b', 'id': 'PygmalionAI_pygmalion-6b', 'name': 'PygmalionAI_pygmalion-6b', 'size': '', 'menu': 'Custom', 'path': 'C:\\KoboldAI\\models\\PygmalionAI_pygmalion-6b', 'ismenu': 'false', 'plugin': 'Huggingface'}
INIT | Searching | GPU support
INIT | Found | GPU support
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:19<00:00, 9.60s/it]
Loading model tensors: 100%|##########| 56/56 [00:05<00:00, 9.52it/s]
INIT | Starting | LUA bridge
INIT | OK | LUA bridge
INIT | Starting | LUA Scripts
INIT | OK | LUA Scripts
Setting Seed
Connection Attempt: 127.0.0.1
INFO | __main__:do_connect:2574 - Client connected!
UI_1
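The `'0_Layers': 18, 'CPU_Layers': 10, 'Disk_Layers': 0` fields in that log describe how the model's transformer layers are split between GPU 0, system RAM, and disk. A toy sketch of that bookkeeping (my own illustration, not KoboldAI's actual code; pygmalion-6b has 28 transformer layers):

```python
def split_layers(total_layers: int, gpu_layers: int, cpu_layers: int) -> dict:
    """Assign transformer layers to GPU 0, CPU RAM, and disk (the remainder)."""
    disk_layers = total_layers - gpu_layers - cpu_layers
    if disk_layers < 0:
        raise ValueError("more layers assigned than the model has")
    return {"0_Layers": gpu_layers, "CPU_Layers": cpu_layers,
            "Disk_Layers": disk_layers}

# Matches the values in the log above for pygmalion-6b:
print(split_layers(28, 18, 10))
```

Offloading more layers to the GPU speeds up generation; layers that don't fit in VRAM spill to CPU RAM and, as a last resort, disk.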
- Kobold API URL for Chub Venus AI
That is our developer version; it's selectable in the Colab version dropdown and also available on https://github.com/henk717/koboldai
- I got KoboldAI running on my computer and successfully connected it to Janitor, here's a small tutorial
Download Kobold from THIS LINK: https://github.com/henk717/KoboldAI. I downloaded Kobold from a different GitHub link and it wouldn't work; you need to get this specific one. Click on "Code", then "Download ZIP".
- I created a repo on GitHub to categorize AI models. You can browse AIs from many categories!
https://github.com/henk717/KoboldAI https://github.com/LostRuins/koboldcpp/ https://github.com/ggerganov/llama.cpp https://github.com/AUTOMATIC1111/stable-diffusion-webui https://github.com/oobabooga/text-generation-webui
- Meta’s new AI lets people make chatbots. They’re using it for sex.
For the third, I don't think Oobabooga supports the horde but KoboldAI does. I won't go into how to install KoboldAI since Oobabooga should give you enough freedom with 7B, 13B and maybe 30B models (depending on available RAM), but KoboldAI lets you download some models directly from the web interface, supports using online service providers to run the models for you, and supports the horde with a list of available models to choose from.
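Beyond the web interface, a locally running KoboldAI instance also exposes an HTTP API that other front ends connect to. A minimal sketch of calling its generate endpoint; the `/api/v1/generate` path, default port, and payload fields are my assumptions based on KoboldAI United's built-in API docs, so verify them against your install:

```python
import json
from urllib import request

def build_generate_payload(prompt: str, max_length: int = 80) -> dict:
    """Request body for KoboldAI's generate endpoint (assumed fields)."""
    return {"prompt": prompt, "max_length": max_length}

def kobold_generate(prompt: str, base_url: str = "http://127.0.0.1:5000") -> str:
    """POST the prompt to a local KoboldAI instance and return the generated text."""
    body = json.dumps(build_generate_payload(prompt)).encode()
    req = request.Request(base_url + "/api/v1/generate", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

This is the same kind of endpoint that tools like JanitorAI or Chub point at when you give them a "Kobold API URL".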
- Kobold AI broke after update (New to this)
"Your PyTorch installation did not update correctly; you can solve this by running install_requirements.bat in the mode where it deletes the existing runtime. Alternatively, you can download a fresh copy of the offline installer for KoboldAI United from: https://github.com/henk717/KoboldAI/releases"
What are some alternatives?
Projects - A list of practical projects that anyone can solve in any programming language.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
KoboldAI-Client
pytube-dl - This Python application can be used to download YouTube videos, thumbnails, and descriptions.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
cligen - Nim library to infer/generate command-line interfaces / option / argument parsing.
KoboldAI
NumPy - The fundamental package for scientific computing with Python.
stable-diffusion-webui - Stable Diffusion web UI
Nim - Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).
llama.cpp - LLM inference in C/C++