qlora VS privateGPT

Compare qlora vs privateGPT and see what their differences are.

qlora

QLoRA: Efficient Finetuning of Quantized LLMs (by artidoro)

privateGPT

Interact with your documents using the power of GPT, 100% privately, no data leaks [Moved to: https://github.com/zylon-ai/private-gpt] (by imartinez)
                 qlora                 privateGPT
Mentions         80                    1
Stars            9,388                 50,198
Growth           -                     -
Activity         7.4                   -
Last commit      7 months ago          about 1 month ago
Language         Jupyter Notebook      Python
License          MIT License           Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

qlora

Posts with mentions or reviews of qlora. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-30.
  • FLaNK Stack Weekly for 30 Oct 2023
    24 projects | dev.to | 30 Oct 2023
  • I released Marx 3B V3.
    1 project | /r/LocalLLaMA | 25 Oct 2023
    Marx 3B V3 is StableLM 3B 4E1T instruction-tuned on EverythingLM Data V3 (ShareGPT format) for 2 epochs using QLoRA.
  • Tuning and Testing Llama 2, Flan-T5, and GPT-J with LoRA, Sematic, and Gradio
    2 projects | news.ycombinator.com | 26 Jul 2023
    https://github.com/artidoro/qlora

    The tools and mechanisms for getting a model to do what you want change constantly, and quickly. Build and understand a notebook yourself, and keep dependencies to a minimum; you will need to swap them out.

  • Yet another QLoRA tutorial
    2 projects | /r/LocalLLaMA | 24 Jul 2023
    My own project is still in raw generated form, and this makes me think about trying qlora's scripts; it gives me some confidence that I can get it to work, now that someone else has carved a path and charted the map. I was going to target llamatune, which was mentioned here the other day.
  • Creating a new Finetuned model
    3 projects | /r/LocalLLaMA | 11 Jul 2023
    Most papers I read used at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training (sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the low end, openchat/openchat · Hugging Face; there are many more examples).
  • [R] LaVIN-lite: Training your own Multimodal Large Language Models on one single GPU with competitive performance! (Technical Details)
    2 projects | /r/MachineLearning | 4 Jul 2023
    4-bit quantization training mainly refers to qlora. Simply put, qlora quantizes the weights of the LLM into 4-bit for storage, while dequantizing them into 16-bit during the training process to preserve training precision. This method significantly reduces GPU memory overhead during training (training speed should not vary much), and the approach is well suited to combining with parameter-efficient methods. However, the original paper was designed for single-modal LLMs, and the code has already been wrapped in HuggingFace's library, so we extracted the core code from HuggingFace's library and migrated it into LaVIN's code. The main principle is to replace all linear layers in the LLM with 4-bit quantized layers. Those interested can refer to our implementation in quantization.py and mm_adaptation.py, which is roughly a dozen lines of code. (A minimal sketch of this layer swap appears after this list.)
  • [D] To all the machine learning engineers: most difficult model task/type you’ve ever had to work with?
    2 projects | /r/MachineLearning | 3 Jul 2023
    There have been some new developments, like QLoRA, which help fine-tune LLMs without updating all the weights. (See the LoRA sketch after this list.)
  • Finetune MPT-30B using QLORA
    2 projects | /r/LocalLLaMA | 3 Jul 2023
    This might be helpful: https://github.com/artidoro/qlora/issues/10
  • is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning?
    1 project | /r/LocalLLaMA | 29 Jun 2023
    Curious, since the qlora paper only reports the LoRA/QLoRA comparison against full fine-tuning for small 7B models; for 13B/33B/65B it does not do so (Table 4 in the paper). It would be helpful if anyone could provide links where I can read more on the efficacy or disadvantages of LoRA.
  • Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning.
    1 project | /r/LocalLLaMA | 29 Jun 2023
    This might not be the appropriate answer, but did you take a look at this repository? https://github.com/artidoro/qlora With artidoro's repository it's pretty easy to train a QLoRA. You just prepare your own dataset and run the following command:

        python qlora.py --model_name_or_path <model> --dataset="path/to/your/dataset" --dataset_format="self-instruct"

    This is only available for several dataset formats, but every dataset format has to have input-output pairs, so the dataset JSON has to look like this (a sketch of writing such a file appears after this list):

        [
          { "input": "something", "output": "something" },
          { "input": "something", "output": "something" }
        ]
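
A minimal sketch of the layer swap described in the LaVIN-lite post above, assuming bitsandbytes is installed; the helper name is illustrative, not LaVIN's actual code:

    # Replace every nn.Linear in a model with a bitsandbytes 4-bit layer:
    # weights are stored in 4-bit and dequantized to 16-bit per forward pass.
    import torch
    import torch.nn as nn
    import bitsandbytes as bnb

    def replace_linear_with_4bit(model: nn.Module) -> nn.Module:
        for name, module in model.named_children():
            if isinstance(module, nn.Linear):
                quantized = bnb.nn.Linear4bit(
                    module.in_features,
                    module.out_features,
                    bias=module.bias is not None,
                    compute_dtype=torch.float16,  # dequantize to 16-bit for compute
                )
                setattr(model, name, quantized)
            else:
                replace_linear_with_4bit(module)  # recurse into submodules
        return model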
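
The LoRA idea mentioned above, sketched with HuggingFace's peft library; the model name and target modules are illustrative assumptions, not a specific recipe from these posts:

    # Wrap a causal LM with low-rank adapters so only the small adapter
    # matrices are trained, not the full weight matrices.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    config = LoraConfig(
        r=8,                                  # rank of the adapter matrices
        lora_alpha=16,                        # scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # prints the tiny trainable fraction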
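
Finally, a sketch of producing a dataset file in the input/output shape the last post describes; the file name and examples are made up:

    # Write a small input/output dataset as JSON for qlora.py to consume.
    import json

    examples = [
        {"input": "Translate 'hello' to French.", "output": "Bonjour."},
        {"input": "What is 2 + 2?", "output": "4"},
    ]

    with open("my_dataset.json", "w") as f:
        json.dump(examples, f, indent=2)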

privateGPT

Posts with mentions or reviews of privateGPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-23.
  • PrivateGPT exploring the Documentation
    2 projects | dev.to | 23 Mar 2024
        # install developer tools
        xcode-select --install
        # create a python sandbox
        mkdir privateGTP
        cd privateGTP/
        python3 -m venv .
        # activate the local context
        source bin/activate
        # privateGTP uses poetry for python module management
        privateGTP> pip install poetry
        # sync the privateGTP project
        privateGTP> git clone https://github.com/imartinez/privateGPT
        # enable MPS for model loading and processing
        privateGTP> CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
        privateGTP> cd privateGPT
        # configure python dependencies
        privateGTP> poetry run python3 scripts/setup
        # launch the web interface to confirm the default model works
        privateGTP> python3 -m private_gpt
        # navigate a safari browser to http://localhost:8001/
        # to bulk-import documentation, stop the web interface, as the vector database is not in multi-user mode
        privateGTP> [control] + "C"
        # import some PDFs
        privateGTP> curl "https://docs.intersystems.com/irislatest/csp/docbook/pdfs.zip" -o /tmp/pdfs.zip
        privateGTP> unzip /tmp/pdfs.zip -d /tmp
        # took a few hours to process
        privateGTP> make ingest /tmp/pdfs/pdfs/
        # launch the web interface again to query the documentation
        privateGTP> python3 -m private_gpt

What are some alternatives?

When comparing qlora and privateGPT, you can also consider the following projects:

alpaca-lora - Instruct-tune LLaMA on consumer hardware

localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

anything-llm - The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

gpt4all - gpt4all: run open-source LLMs anywhere

ggml - Tensor library for machine learning

h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/

alpaca_lora_4bit

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

llm-foundry - LLM training code for Databricks foundation models

langchain - 🦜🔗 Build context-aware reasoning applications