How does using QLoRAs when running Llama on CPU work?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

  • It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py (a usage sketch follows this list)

  • FastChat

    An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

  • There is a script to merge LoRA or QLoRA weights into the original Llama weights: https://github.com/lm-sys/FastChat/blob/main/fastchat/model/apply_lora.py (a sketch of the underlying merge step follows this list)

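The PEFT route mentioned above boils down to loading the base model, attaching the adapter, and calling merge_and_unload so the low-rank updates are folded into the base weights. A minimal sketch (not from the post; the model name and adapter path are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "meta-llama/Llama-2-7b-hf"  # placeholder: your base Llama model
adapter_path = "path/to/qlora-adapter"        # placeholder: your trained QLoRA adapter

# Load the base model in fp16 (not 4-bit) so the merge produces plain dense weights.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name, torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Attach the adapter, then fold its low-rank updates into the base weights.
model = PeftModel.from_pretrained(base_model, adapter_path)
model = model.merge_and_unload()

# Save the merged model; from here it can be converted/quantized for CPU
# runtimes such as llama.cpp.
model.save_pretrained("merged-model")
tokenizer.save_pretrained("merged-model")
```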
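For intuition, the merge these scripts perform amounts to adding the scaled low-rank product back into each adapted weight matrix, W' = W + (alpha/r)·(B·A). An illustrative sketch of that single step (not the FastChat code; tensor names and shapes are assumptions):

```python
import torch

def merge_lora_delta(base_weight: torch.Tensor,
                     lora_A: torch.Tensor,
                     lora_B: torch.Tensor,
                     lora_alpha: float,
                     r: int) -> torch.Tensor:
    """Fold one LoRA update into a single weight matrix.

    base_weight: (out_features, in_features)
    lora_A:      (r, in_features)
    lora_B:      (out_features, r)
    """
    scaling = lora_alpha / r
    # W' = W + scaling * (B @ A). After merging, the adapter is no longer
    # needed at inference time, so the merged weights can be quantized for CPU use.
    return base_weight + scaling * (lora_B @ lora_A)

# Example with toy shapes:
W = torch.randn(64, 32)
A = torch.randn(8, 32)   # rank r = 8
B = torch.zeros(64, 8)   # B is initialized to zero in LoRA training
W_merged = merge_lora_delta(W, A, B, lora_alpha=16, r=8)
```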
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • LoftQ: LoRA-fine-tuning-aware Quantization

    1 project | news.ycombinator.com | 19 Dec 2023
  • PEFT 0.5 supports fine-tuning GPTQ models

    1 project | /r/LocalLLaMA | 24 Aug 2023
  • Exploding loss when trying to train OpenOrca-Platypus2-13B

    1 project | /r/LocalLLaMA | 21 Aug 2023
  • [D] Is there a difference between p-tuning and prefix tuning ?

    1 project | /r/MachineLearning | 3 Jul 2023
  • How to merge the two weights into a single weight?

    3 projects | /r/LocalLLaMA | 9 Jun 2023