Run 70B LLM Inference on a Single 4GB GPU with This New Technique

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  1. FlexLLMGen

(Discontinued) Running large language models on a single GPU for throughput-oriented scenarios.

  2. petals

    🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

    There is already an implementation along the same lines using a torrent-style architecture. (A minimal usage sketch follows the project list below.)

    https://petals.dev/

  3. nnl

    A low-latency and high-performance inference engine for large models on low-memory GPU platforms.

    I did roughly the same thing in one of my hobby projects, https://github.com/fengwang/nnl. But instead of using the SSD, I load all the weights into host memory, and while running inference through the model layer by layer, I asynchronously copy memory from global to shared memory in the hope of better performance. However, my approach is bound by the PCI-E bandwidth. (A sketch of this layer-by-layer idea also follows the project list below.)
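
For the petals project above, the client behaves like an ordinary Hugging Face model while the transformer blocks execute on remote peers. Here is a minimal sketch following the usage pattern in the petals README; the model name is only illustrative, and exact class names can differ between petals versions:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Illustrative model name; any model with an active public swarm works.
model_name = "petals-team/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only the embeddings and LM head are loaded locally; the transformer
# blocks are served by other peers in the swarm, BitTorrent-style.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```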
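
The layer-by-layer idea from the nnl comment can also be sketched in PyTorch. This is not the nnl code (which is a C++/CUDA engine) but a minimal sketch of the same approach under assumed stand-in layers: keep all weights in pinned host memory, and copy the next layer's weights to the GPU on a side stream while the current layer computes, so roughly one layer of weights is resident on the GPU at a time and throughput is limited by PCI-E bandwidth.

```python
import torch
import torch.nn as nn

device = torch.device("cuda")

# Stand-in for a large model: a stack of big Linear layers whose weights
# stay in pinned (page-locked) host memory so host-to-device copies can
# run asynchronously.
layers = [nn.Linear(4096, 4096, bias=False) for _ in range(8)]
for layer in layers:
    layer.weight.data = layer.weight.data.pin_memory()

copy_stream = torch.cuda.Stream()

def prefetch(layer):
    # Enqueue an async copy of one layer's weights onto the side stream.
    with torch.cuda.stream(copy_stream):
        return layer.weight.to(device, non_blocking=True)

@torch.no_grad()
def forward(x):
    x = x.to(device)
    next_w = prefetch(layers[0])
    for i in range(len(layers)):
        # Make the compute stream wait until layer i's weights have arrived.
        torch.cuda.current_stream().wait_stream(copy_stream)
        w = next_w
        w.record_stream(torch.cuda.current_stream())
        if i + 1 < len(layers):
            # Overlap the next layer's PCI-E copy with this layer's compute.
            next_w = prefetch(layers[i + 1])
        x = torch.relu(x @ w.t())  # run layer i; only its weights live on the GPU
        del w
    return x

print(forward(torch.randn(1, 4096)).shape)
```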

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • Colorful Custom RTX 4060 Ti GPU Clocks Outed, 8 GB VRAM Confirmed

    1 project | /r/hardware | 17 Apr 2023
  • FlexGen: Running large language models on a single GPU

    1 project | /r/hypeurls | 26 Mar 2023
  • FlexGen: Running large language models on a single GPU

    1 project | /r/patient_hackernews | 26 Mar 2023
  • FlexGen: Running large language models on a single GPU

    1 project | /r/hackernews | 26 Mar 2023
  • FlexGen: Running large language models on a single GPU

    4 projects | news.ycombinator.com | 25 Mar 2023
