GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • gptqlora

    GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ

  • The difference from QLoRA is that GPTQ is used for model quantization instead of NF4 (4-bit NormalFloat) + DQ (Double Quantization). The advantage is that you can expect better performance, since GPTQ generally quantizes more accurately than conventional bitsandbytes. The downside is that GPTQ is a one-shot quantization method, so it is less convenient than bitsandbytes and, unlike bitsandbytes, not as universally applicable. I'm still experimenting, but it seems to work. At the very least, I hope it gives people using LoRA another option (see the sketch below). https://github.com/qwopqwop200/gptqlora/tree/main

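The idea, roughly: the base model is quantized once with GPTQ, its weights stay frozen, and only small LoRA adapter matrices are trained on top. Below is a minimal sketch of that setup using the Hugging Face transformers and peft libraries; the checkpoint path and LoRA hyperparameters are illustrative assumptions, not code from the gptqlora repository.

```python
# Sketch only: load an already GPTQ-quantized checkpoint, freeze it,
# and attach trainable LoRA adapters with peft. The path and hyperparameters
# below are placeholders, not values taken from the gptqlora repo.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumption: this path points at a model that was quantized ahead of time
# with GPTQ (one-shot, calibration-based quantization).
base = AutoModelForCausalLM.from_pretrained(
    "path/to/gptq-quantized-model",
    device_map="auto",
    torch_dtype=torch.float16,
)

# The quantized base stays frozen; only the LoRA adapters receive gradients.
for p in base.parameters():
    p.requires_grad = False

lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # typical attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# From here, `model` can go into an ordinary training loop or Trainer.
```

The difference from QLoRA is confined to how the frozen base weights were produced (GPTQ rather than NF4 + double quantization); the LoRA training step itself is unchanged.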