gptq — Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
GPTQ is better than GGML quantization because it reoptimizes the weights to compensate for the accuracy lost in quantization. With 4 bits and groupsize 128 it approximates FP16 performance pretty well. GGML just does round-to-nearest (RTN) without reoptimizing the weights against a calibration dataset (generally the C4 dataset, per the default GPTQ-for-LLaMA configuration). But llama.cpp could probably implement such a method themselves; the paper is freely available: https://arxiv.org/abs/2210.17323
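To make the contrast concrete, here is a minimal sketch of groupwise round-to-nearest quantization, the baseline GPTQ improves on. All names here are my own for illustration; this is not the actual GGML or GPTQ code. Each group of 128 weights shares one scale, every weight is rounded independently, and no error compensation happens (GPTQ would instead adjust the remaining unquantized weights to absorb each rounding error, using second-order statistics from calibration data).

```python
import numpy as np

def quantize_rtn(w, bits=4, group_size=128):
    """Symmetric round-to-nearest (RTN) quantization with per-group scales.

    This is the naive baseline: round each weight to the nearest
    representable value, with no reoptimization of other weights.
    """
    w = w.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1  # e.g. +/-7 for 4-bit symmetric
    max_abs = np.abs(w).max(axis=1, keepdims=True)
    scale = np.maximum(max_abs, 1e-12) / qmax  # avoid division by zero
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    return q * scale

# Quantize random "weights" and measure the reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_rtn(w)
w_hat = dequantize(q, scale).reshape(-1)
mean_err = np.abs(w - w_hat).mean()
print(f"mean abs quantization error: {mean_err:.4f}")
```

GPTQ's insight is that this per-weight rounding error, while small, compounds across a layer's output; quantizing weights one at a time and updating the not-yet-quantized ones to cancel each error keeps the layer's output on calibration data much closer to the FP16 original.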