privateGPT (DISCONTINUED) vs GPTQ-for-LLaMa
| | privateGPT | GPTQ-for-LLaMa |
|---|---|---|
| Mentions | 1 | 19 |
| Stars | 50,198 | 129 |
| Growth | - | - |
| Activity | - | 7.7 |
| Last commit | 6 days ago | 10 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
privateGPT
-
PrivateGPT exploring the Documentation
# install developer tools
xcode-select --install
# create a python sandbox
mkdir PrivateGTP
cd PrivateGTP/
python3 -m venv .
# activate the local context
source bin/activate
# privateGPT uses poetry for python module management
privateGTP> pip install poetry
# fetch the privateGPT project
privateGTP> git clone https://github.com/imartinez/privateGPT
# enable MPS for model loading and processing
privateGTP> CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
privateGTP> cd privateGPT
# install and configure python dependencies
privateGTP> poetry run python3 scripts/setup
# launch the web interface to confirm it is operational on the default model
privateGTP> python3 -m private_gpt
# navigate a browser to http://localhost:8001/
# to bulk-import documentation, the web interface needs to be stopped, as the vector database is not in multi-user mode
privateGTP> [control] + "C"
# import some PDFs
privateGTP> curl "https://docs.intersystems.com/irislatest/csp/docbook/pdfs.zip" -o /tmp/pdfs.zip
privateGTP> unzip /tmp/pdfs.zip -d /tmp
# ingestion took a few hours to process
privateGTP> make ingest /tmp/pdfs/pdfs/
# launch the web interface again to query the documentation
privateGTP> python3 -m private_gpt
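Once the web interface is back up, the same instance can also be queried from the command line instead of the browser. A minimal sketch, assuming the default port 8001 and the OpenAI-style /v1/completions endpoint that recent privateGPT releases expose (the endpoint path and JSON field names may differ by version):

# query the running privateGPT instance over HTTP
# (endpoint and fields are assumptions; check your version's API docs)
curl -s http://localhost:8001/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "How do I create a namespace in InterSystems IRIS?", "use_context": true, "include_sources": true}'

With use_context and include_sources enabled, the response should include an answer grounded in the ingested PDFs along with the source chunks it drew from.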
GPTQ-for-LLaMa
-
I have tried various different methods to install, and none work. Can you spoon-feed me how?
git clone https://github.com/oobabooga/GPTQ-for-LLaMa
-
Help needed with installing quant_cuda for the WebUI
This worked for me on Ubuntu. If you want to use the CUDA branch instead of triton, do the same steps except clone this GPTQ-for-LLaMa fork and run python setup_cuda.py install
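A condensed sketch of that route, assuming the fork linked above is cloned into text-generation-webui's repositories folder as in the other answers on this page:

# inside text-generation-webui/repositories (fork URL and cuda branch as given above)
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
pip install -r requirements.txt
# builds and installs the quant_cuda extension
python setup_cuda.py install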
-
AutoGPTQ vs GPTQ-for-llama?
If you don't have triton and you use AutoGPTQ, you're going to notice a huge slowdown compared to the old GPTQ-for-LLaMa cuda branch. For me AutoGPTQ gives a whopping 1 token per second compared to the old GPTQ that gives a decent 9 tokens per second; both times I used the same-sized model. (I think the slowdown is due to AutoGPTQ using the newer cuda branch, which is much slower than the old one.)
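A quick way to check whether the triton path is available at all in the environment the webui runs in (triton ships as a regular PyPI package on Linux; whether installing it alone restores AutoGPTQ's speed is an assumption worth testing):

# check whether triton can be imported by the same python that runs the webui
python -c "import triton; print(triton.__version__)" || echo "triton is not installed"
# on Linux, installing the PyPI package is usually what AutoGPTQ's triton kernels need
pip install triton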
-
Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure
Are you using a later version of GPTQ-for-LLaMa? If so, go to ooba's CUDA fork (https://github.com/oobabooga/GPTQ-for-LLaMa). That's what I made it in and it definitely works with that. And that's what's included in the one-click-installers.
No, I still use ooba's fork to ensure the widest compatibility. I would love to use a later version - specifically, I want to move to AutoGPTQ. But if I do that people who are still using ooba's fork (which is like 90% of people) can't use CPU offloading. They get a ton of errors and it just breaks.
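For reference, the CPU offloading being discussed is driven by text-generation-webui's --pre_layer flag, which keeps only the first N transformer layers on the GPU and runs the rest on the CPU; in that era's webui it works with the GPTQ-for-LLaMa loader, which is why staying on ooba's fork matters. A hedged sketch (the model folder name is a placeholder):

# hypothetical launch: 4-bit GPTQ Guanaco with the first 30 layers on the GPU, the rest offloaded to CPU
python server.py --chat --model TheBloke_guanaco-33B-GPTQ --wbits 4 --pre_layer 30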
-
GPT4All: A little helper to get started
cd text-generation-webui   # wherever you have it installed
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
- wizard-vicuna-13B • Hugging Face
-
Anyone actually running 30b/65b at reasonably high speed? What's your rig?
My GPTQ-for-LLaMa folder under repositories says it's pointed at https://github.com/oobabooga/GPTQ-for-LLaMa.git. But I've run through the instructions and also applied the monkey patch to train and apply a 4-bit LoRA, which may come into play. No idea.
-
Trying to run TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g with latest GPTQ-for-LLaMa CUDA branch
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
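If it helps, the rest of the flow under the same assumptions (text-generation-webui's bundled download-model.py and its owner_model folder naming; --wbits and --groupsize taken from the model name):

# from the text-generation-webui root: fetch the model, then launch against it
python download-model.py TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g
python server.py --chat --model TheBloke_vicuna-13B-1.1-GPTQ-4bit-128g --wbits 4 --groupsize 128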
-
Error on model load: Torch not compiled with CUDA enabled
set-executionpolicy RemoteSigned -Scope CurrentUser
python -m venv venv
venv\Scripts\Activate.ps1
pip install torchaudio torch==2.0.0+cu118 torchvision ninja --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl
pip install -r requirements.txt
mkdir repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
cd GPTQ-for-LLaMa
python -m pip install -r requirements.txt
python setup_cuda.py install
cd ..\..
python server.py --chat --model-menu
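A quick sanity check after that reinstall, to confirm the CUDA build of torch is the one actually being picked up (it should print True plus a +cu118 version string):

# verify torch was installed with CUDA support before launching the webui again
python -c "import torch; print(torch.cuda.is_available(), torch.__version__)"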
What are some alternatives?
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
one-click-installers - Simplified installers for oobabooga/text-generation-webui.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
SillyTavern - LLM Frontend for Power Users.
Local-LLM-Comparison-Colab-UI - Compare the performance of different LLM that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
ollama - Get up and running with Llama 2, Mistral, Gemma, and other large language models.