Paper reduces the resource requirement of running a 175B-parameter model down to a single 16 GB GPU

This page summarizes the projects mentioned and recommended in the original post on /r/ChatGPTforall
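
For context on the headline claim: a 175B-parameter model stored in FP16 needs roughly 350 GB for its weights alone, far more than a 16 GB GPU can hold, which is why FlexGen offloads tensors to CPU memory and disk. A rough back-of-the-envelope check (the 16 GB figure comes from the post; the FP16 assumption and arithmetic are mine):

    # Illustrative memory arithmetic for a 175B-parameter model.
    params = 175e9            # parameter count
    bytes_per_param = 2       # FP16 storage, a common default
    weight_gb = params * bytes_per_param / 1e9
    print(f"FP16 weights alone: ~{weight_gb:.0f} GB")   # ~350 GB
    print("Single-GPU budget discussed in the post: 16 GB")
    # The gap is why FlexGen streams weights, KV cache, and activations
    # through the GPU from CPU memory and disk instead of keeping them resident.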

  • FlexGen

    Discontinued. Running large language models like OPT-175B/GPT-3 on a single GPU, focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen] (by Ying1123)

  • FlexGen

    Running large language models on a single GPU for throughput-oriented scenarios (see the usage sketch after this list).

  • The paper's new link is: https://github.com/FMInference/FlexGen/blob/main/docs/paper.pdf
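
For readers who want to try it, here is a minimal invocation sketch that launches FlexGen's command-line entry point from Python. The "python -m flexgen.flex_opt" module path and the --model / --percent flags follow the usage documented in the FlexGen README; the six --percent values (weights on GPU/CPU, KV cache on GPU/CPU, activations on GPU/CPU, with the remainder offloaded to disk) are illustrative, not the settings used in the post:

    # Minimal sketch: launch FlexGen's CLI from Python.
    # Assumes `pip install flexgen` and a CUDA-capable GPU; flag names follow the
    # FlexGen README, and the percentage values below are illustrative only.
    import subprocess

    subprocess.run(
        [
            "python", "-m", "flexgen.flex_opt",
            "--model", "facebook/opt-1.3b",  # small OPT model for a quick smoke test
            "--percent", "100", "0", "100", "0", "100", "0",
            # weights GPU/CPU %, KV cache GPU/CPU %, activations GPU/CPU %;
            # lowering the GPU percentages offloads more to CPU RAM (and disk).
        ],
        check=True,
    )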

NOTE: The number of mentions for each project counts mentions in common posts plus user-suggested alternatives, so a higher number means a more popular project.


Related posts