Same as with Stable Diffusion, new AI models based on LAION are coming up slowly but surely: a paper reduces the resource requirement of a 175B model down to a 16GB GPU

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion

  • FlexGen

    Discontinued: Running large language models like OPT-175B/GPT-3 on a single GPU, focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen] (by Ying1123) A rough sketch of the offloading idea follows this list.

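FlexGen fits a 175B-parameter model onto a single 16GB GPU by offloading weights, activations, and the KV cache to CPU RAM and disk. FlexGen has its own scheduler and API; as a rough, non-authoritative illustration of the same offloading idea, the sketch below uses the generic Hugging Face transformers/accelerate offloading path instead. The model name and memory limits are assumptions for illustration, not values from the post.

```python
# Rough sketch of weight offloading (not FlexGen's actual API): layers that do
# not fit in GPU memory are kept in CPU RAM or on disk and streamed in during
# generation. The model name and memory caps below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # small stand-in for OPT-175B

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",                        # place layers on GPU, overflow to CPU/disk
    offload_folder="offload",                 # disk cache for layers that fit nowhere else
    max_memory={0: "16GiB", "cpu": "64GiB"},  # cap GPU usage at roughly 16 GB
)

prompt = "Running a 175B model on a single GPU works by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

FlexGen itself goes beyond this generic offloading path by scheduling weights, activations, and the KV cache across GPU, CPU, and disk to maximize throughput for batched generation.
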
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number generally means a more popular project.

Related posts

  • Run 70B LLM Inference on a Single 4GB GPU with This New Technique

    3 projects | news.ycombinator.com | 3 Dec 2023
  • Colorful Custom RTX 4060 Ti GPU Clocks Outed, 8 GB VRAM Confirmed

    1 project | /r/hardware | 17 Apr 2023
  • FlexGen: Running large language models on a single GPU

    1 project | /r/hypeurls | 26 Mar 2023
  • FlexGen: Running large language models on a single GPU

    1 project | /r/patient_hackernews | 26 Mar 2023
  • FlexGen: Running large language models on a single GPU

    1 project | /r/hackernews | 26 Mar 2023