one-click-installers vs WizardVicunaLM

| | one-click-installers | WizardVicunaLM |
|---|---|---|
| Mentions | 18 | 12 |
| Stars | 470 | 711 |
| Growth | - | - |
| Activity | 8.9 | 6.8 |
| Last commit | 7 months ago | 11 months ago |
| Language | Python | - |
| License | GNU Affero General Public License v3.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
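The tracker's exact scoring formula isn't published on this page, but a recency-weighted sum like the following hedged Python sketch captures the stated idea that recent commits count more than older ones (the half-life parameter is an assumption, not a known constant of the tracker):

```python
# Hedged sketch of a recency-weighted activity score. The real tracker's
# formula is not given here; this only illustrates "recent commits have
# higher weight than older ones". half_life_days is an assumed parameter.
def activity_score(commit_ages_days, half_life_days=30.0):
    """Each commit contributes a weight that halves every half_life_days."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with mostly recent commits scores higher than one with the same
# number of commits spread over a long-idle history.
print(activity_score([1, 2, 5, 9, 14]))          # recent-heavy history
print(activity_score([60, 120, 200, 300, 365]))  # long-idle history
```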
one-click-installers
-
AMD GPUs on Windows support?
AMD does not offer installation options for ROCm on Windows. I'm not familiar with the workarounds to make it work; if you find a solution, you can contribute it to https://github.com/oobabooga/one-click-installers/
-
Oobabooga for Windows
Running start_windows.bat should take care of everything.
-
Quant-CUDA Error?
Had the same issue; it turned out I was using an old one-click installer/updater. You need to use https://github.com/oobabooga/one-click-installers and reinstall everything from scratch.
-
Can't find the "start" file.
Are you sure you're looking at the right folder? start_windows.bat is there. It's listed in the source code: https://github.com/oobabooga/one-click-installers
- Any UI that allows Windows + AMD GPU?
- WizardLM-30B-Uncensored
-
13b-4bit-128g - Trying to run a compressed model without success (the problem exists only with 13B models for some reason). No error code is displayed.
one-click-installers/INSTRUCTIONS.TXT
-
GPT4All: A little helper to get started
They explain it over here: https://github.com/oobabooga/one-click-installers/issues/56
-
Visual Studio compile errors
I solved this by adding the following Individual components in the Visual Studio Installer: the Windows 10 SDK, C++ CMake tools for Windows, and MSVC v142 - VS 2019 C++ build tools. See https://github.com/oobabooga/one-click-installers/issues/56
-
python setup.py bdist_wheel did not run successfully.
It appears one of the extensions isn't pre-compiled on install. I believe you have the same problem as listed here: https://github.com/oobabooga/one-click-installers/issues/56
WizardVicunaLM
-
WizardLM-13B-V1.0-Uncensored
HELP! I need some clarification. I'm familiar with Wizard-Vicuna-13b-Uncensored, which is EHartford's uncensoring of WizardVicunaLM.
-
Ask HN: Should I cancel my GPT-4 subscription and get Copilot instead?
> I’m also open to open source models but I hear they’re not even as good as gpt3.5.
WizardVicunaLM claims ~97% performance relative to GPT3.5: https://github.com/melodysdreamj/WizardVicunaLM
It's not particularly great at generating code, but it's uncensored and writes fantastic prose. I've been using it for the last week and I'm really satisfied with where it stands.
> It’s sad that we’re stuck in this monopoly of powerful LLMs.
Won't anyone just sponsor a few months of dedicated GPU training, finetuning, and quantizing so they can be held legally accountable for its output?
I wouldn't hold my breath.
-
Wizard-Vicuna-30B-Uncensored
Also, I just noticed that you may have forgotten to update the readme, which references 13B, not 30B; maybe that was intentional. (If you linked directly to the GitHub repo ("WizardVicunaLM"), that would make it a bit easier for people like me to follow.)
-
Where are we at with self-hosted AI today?
There are a lot of options. Right now I'm using WizardVicunaLM to great success: https://github.com/melodysdreamj/WizardVicunaLM
It combines the uncensored WizardLM data with the Vicuna tuning to create a surprisingly high-performance model. If the chart on their GitHub page is to be believed, their model approaches GPT-3.5 performance.
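If you want to try the model yourself, here is a minimal, hedged sketch of loading a Wizard-Vicuna checkpoint with Hugging Face transformers. The checkpoint id below is an assumption for illustration, so substitute whichever Wizard-Vicuna build you actually use; a 13B model also needs substantial VRAM, and device_map="auto" requires the accelerate package.

```python
# Minimal sketch: load and query a Wizard-Vicuna checkpoint via transformers.
# The repo id is an assumption for illustration; swap in the checkpoint you use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "junelee/wizard-vicuna-13b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Vicuna-style models expect a USER/ASSISTANT chat format.
prompt = "USER: Explain what makes Wizard-Vicuna different from plain Vicuna.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```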
-
WizardLM-30B-Uncensored
Here are the codebase and datasets for WizardVicuna: https://github.com/melodysdreamj/WizardVicunaLM, https://github.com/lm-sys/FastChat, and https://huggingface.co/datasets/RyokoAI/ShareGPT52K
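For reference, a hedged sketch of pulling the ShareGPT52K data with the Hugging Face datasets library (this assumes the repo's JSON shards load directly via load_dataset; otherwise you would pass data_files explicitly):

```python
# Sketch: inspect the ShareGPT52K dataset referenced above. Assumes the hub
# repo's JSON files load directly; if not, point data_files at the shards.
from datasets import load_dataset

ds = load_dataset("RyokoAI/ShareGPT52K", split="train")
print(len(ds), "conversations")
print(ds[0])  # one multi-turn ShareGPT conversation record
```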
- LLM that combines the principles of WizardLM and VicunaLM
-
[P] airoboros 7b - instruction tuned on 100k synthetic instruction/responses
I used the same questions from WizardVicunaLM.
- Is there a "rut" that we're in on the way to general AI?
- WizardLM-13B-Uncensored
-
Weekly Megathread
https://github.com/melodysdreamj/WizardVicunaLM - Combines the WizardLM and Vicuna principles. Made by u/Clear-Jelly2873
What are some alternatives?
GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
gpt4all - gpt4all: run open-source LLMs anywhere
llama.cpp - LLM inference in C/C++
gradio - Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
KoboldAI
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
micromamba-releases - Micromamba executables mirrored from conda-forge as Github releases
nsfw-prompt-detection-sd - NSFW Prompt Detection for Stable Diffusion
Llama-X - Open Academic Research on Improving LLaMA to SOTA LLM
shap-e - Generate 3D objects conditioned on text or images