Run 70B LLM Inference on a Single 4GB GPU with This New Technique

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • FlexGen

    Running large language models on a single GPU for throughput-oriented scenarios.

  • petals

    🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

  • There is already an implementation along the same lines using a torrent-style architecture (a minimal client-side usage sketch appears after this list).

    https://petals.dev/

  • nnl

    A low-latency, high-performance inference engine for large models on low-memory GPU platforms.

  • I did roughly the same thing in one of my hobby projects: https://github.com/fengwang/nnl. But instead of using an SSD, I load all the weights into host memory, and while running inference through the model layer by layer, I asynchronously copy memory from global to shared memory in the hope of better performance (a sketch of this overlap pattern follows this list). However, my approach is bound by PCIe bandwidth.
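
The comment above describes the core trick shared by these projects: keep the full set of weights off the GPU and stream them in one layer at a time, overlapping the upload of the next layer with the compute of the current one. Below is a minimal PyTorch sketch of that overlap pattern, assuming pinned host memory, a dedicated copy stream, and two double-buffered GPU slots. It is not code from the nnl repository; the stand-in nn.Linear "layers", the slot/event bookkeeping, and the helper name upload are illustrative assumptions, and it requires a CUDA GPU.

    import torch
    import torch.nn as nn

    NUM_LAYERS, HIDDEN = 8, 4096          # stand-in sizes, not a real 70B model
    device = torch.device("cuda")

    # All weights stay in pinned host memory so host-to-device copies can be async.
    cpu_layers = [nn.Linear(HIDDEN, HIDDEN) for _ in range(NUM_LAYERS)]
    for layer in cpu_layers:
        for p in layer.parameters():
            p.data = p.data.pin_memory()

    # Two GPU-resident slots (double buffering): one computes while the other loads.
    slots = [nn.Linear(HIDDEN, HIDDEN).to(device) for _ in range(2)]
    ready = [torch.cuda.Event() for _ in range(2)]   # weights have landed in this slot
    free = [torch.cuda.Event() for _ in range(2)]    # slot is no longer being read

    copy_stream = torch.cuda.Stream()                # dedicated stream for weight uploads
    compute_stream = torch.cuda.current_stream()

    def upload(slot: int, layer: int) -> None:
        """Asynchronously copy one layer's weights into a GPU slot on copy_stream."""
        with torch.cuda.stream(copy_stream):
            copy_stream.wait_event(free[slot])       # don't clobber a slot still in use
            slots[slot].weight.copy_(cpu_layers[layer].weight, non_blocking=True)
            slots[slot].bias.copy_(cpu_layers[layer].bias, non_blocking=True)
            ready[slot].record(copy_stream)

    @torch.no_grad()
    def forward(x: torch.Tensor) -> torch.Tensor:
        for ev in free:
            ev.record(compute_stream)                # both slots start out free
        upload(0, 0)                                 # prefetch the first layer
        for i in range(NUM_LAYERS):
            slot = i % 2
            compute_stream.wait_event(ready[slot])   # wait until this layer's weights arrive
            x = slots[slot](x)
            free[slot].record(compute_stream)        # slot may be overwritten after this point
            if i + 1 < NUM_LAYERS:
                upload((i + 1) % 2, i + 1)           # overlap next upload with current compute
        return x

    if __name__ == "__main__":
        x = torch.randn(4, HIDDEN, device=device)
        print(forward(x).shape)                      # torch.Size([4, 4096])

Even with perfect overlap, every layer's weights still cross the bus once per forward pass, so throughput is capped by host-to-device bandwidth, which matches the commenter's observation about PCIe being the bottleneck.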
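
For the petals entry above, the client side is only a few lines: a small local shard plus a swarm of peers that serve the transformer blocks. The sketch below follows the usage pattern shown in the Petals README; the AutoDistributedModelForCausalLM class and the petals-team/StableBeluga2 model ID are taken from that documentation and may differ across versions.

    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "petals-team/StableBeluga2"   # example model ID from the Petals docs

    # Only a small client-side portion of the model runs locally; the heavy
    # transformer blocks are served by other peers in the swarm, BitTorrent-style.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=5)
    print(tokenizer.decode(outputs[0]))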
