LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com
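
For readers who want to try the paper's method directly: the LLM.int8() kernels ship in the bitsandbytes library and are exposed through Hugging Face transformers. A minimal sketch, assuming transformers, accelerate, and bitsandbytes are installed; the model ID is only a placeholder:

    # Sketch: load a causal LM with 8-bit (LLM.int8()) weights via bitsandbytes.
    # "facebook/opt-1.3b" is an illustrative model ID, not a recommendation.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "facebook/opt-1.3b"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        load_in_8bit=True,   # enables LLM.int8() quantized linear layers
        device_map="auto",   # spread layers across available GPUs/CPU
    )

    prompt = "8-bit quantization lets large models"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(output[0], skip_special_tokens=True))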

  • AutoGPTQ

    An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.

  • In the wild, people tend to use GPTQ quantization for pure GPU inference: https://github.com/PanQiWei/AutoGPTQ

    And ggml's quantization for CPU inference with partial GPU offload, which was just updated to a more GPTQ-like method a few days ago: https://github.com/ggerganov/llama.cpp/pull/1684

    Some other runtimes like Apache TVM also have their own quant implementations: https://github.com/mlc-ai/mlc-llm

    For training, 4-bit bitsandbytes is SOTA, as far as I know.

    TBH I'm not sure why this November paper is being linked. Few people are running 8-bit models when they could fit a better 3-5-bit model in the same amount of memory.
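
    As a rough illustration of the GPTQ path mentioned above, here is a sketch of quantizing a model with AutoGPTQ. The model ID, calibration text, and output directory are placeholders, and the call pattern follows the project's README, so check the repo for the current API:

        # Sketch: 4-bit GPTQ quantization with AutoGPTQ (placeholder model and data).
        from transformers import AutoTokenizer
        from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

        pretrained_dir = "facebook/opt-125m"
        quantized_dir = "opt-125m-4bit-gptq"

        tokenizer = AutoTokenizer.from_pretrained(pretrained_dir, use_fast=True)
        # GPTQ needs calibration data; a real run would use hundreds of samples.
        examples = [tokenizer("Quantization reduces the memory footprint of large language models.")]

        quantize_config = BaseQuantizeConfig(bits=4, group_size=128)

        model = AutoGPTQForCausalLM.from_pretrained(pretrained_dir, quantize_config)
        model.quantize(examples)            # run the GPTQ algorithm on the calibration set
        model.save_quantized(quantized_dir)

        # Reload the quantized weights for inference:
        model = AutoGPTQForCausalLM.from_quantized(quantized_dir, device="cuda:0")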

  • llama.cpp

    LLM inference in C/C++

  • mlc-llm

    Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

  • SpQR

  • Posted here: https://news.ycombinator.com/item?id=36216126, but it got no traction.

    The paper is titled "SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression", and https://twitter.com/Tim_Dettmers/status/1666076553665744896 is a nice summary.

    Code here: https://github.com/Vahe1994/SpQR (https://news.ycombinator.com/item?id=36219128, also no traction).

  • sentencepiece

    Unsupervised text tokenizer for Neural Network-based text generation.

  • You need to train the model on roughly 1 trillion tokens (https://platform.openai.com/tokenizer, https://github.com/google/sentencepiece) anyway for it to develop reasoning capabilities, and it seems very unlikely that your data amounts to that much.

    I'm highly skeptical that you have enough data to pretrain if you don't have enough data to fine-tune.

    Fine-tuning + vector search + prompting with as much relevant material as you can, on an LLM like PaLM 2 or GPT-4, is what I would do. Otherwise you can use Falcon 40B, of course.

    Maybe I should charge for this, haha.
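
    Separately, since sentencepiece is linked above mostly in passing: a minimal sketch of training and using a tokenizer with it, where the corpus path, vocab size, and sample text are all placeholders:

        # Sketch: train a BPE tokenizer with sentencepiece, then encode/decode text.
        import sentencepiece as spm

        spm.SentencePieceTrainer.train(
            input="corpus.txt",           # placeholder: one sentence per line
            model_prefix="my_tokenizer",  # writes my_tokenizer.model / my_tokenizer.vocab
            vocab_size=32000,
            model_type="bpe",
        )

        sp = spm.SentencePieceProcessor(model_file="my_tokenizer.model")
        ids = sp.encode("LLM.int8() keeps outlier features in 16-bit.", out_type=int)
        print(ids)
        print(sp.decode(ids))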
