High-Speed Large Language Model Serving on PCs with Consumer-Grade GPUs

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com.

  • PowerInfer

    High-speed Large Language Model Serving on PCs with Consumer-grade GPUs

  • Cgml

    GPU-targeted, vendor-agnostic AI library for Windows, with a Mistral model implementation.

  • Since they mentioned they’re working on Mistral-7B, I’d like to note that my GPU-only implementation of Mistral uses slightly over 5GB of VRAM (https://github.com/Const-me/Cgml) and runs pretty well on most consumer-grade GPUs. (A rough VRAM estimate is sketched after this list.)

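For context on that 5GB figure, here is a rough back-of-envelope estimate of the VRAM a quantized Mistral-7B needs. The bits-per-weight, KV-cache, and overhead numbers below are illustrative assumptions, not values taken from the Cgml repository:

    # Rough VRAM estimate for a quantized Mistral-7B.
    # Bits-per-weight, KV-cache, and overhead figures are assumptions,
    # not values read from the Cgml code.
    PARAMS = 7.24e9         # approximate Mistral-7B parameter count
    BITS_PER_WEIGHT = 5.0   # assumed ~5-bit block quantization
    KV_CACHE_GB = 0.5       # assumed fp16 KV cache for a few thousand tokens
    OVERHEAD_GB = 0.3       # assumed activations, staging buffers, allocator slack

    weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
    total_gb = weights_gb + KV_CACHE_GB + OVERHEAD_GB
    print(f"weights ~ {weights_gb:.1f} GB, total ~ {total_gb:.1f} GB")
    # prints: weights ~ 4.5 GB, total ~ 5.3 GB

Under these assumptions the weights alone land around 4.5GB, which is why the full footprint ends up just above 5GB, in line with the comment above.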
NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number therefore means a more popular project.


Related posts

  • PowerInfer: Fast Large Language Model Serving with a Consumer-Grade GPU [pdf]

    3 projects | news.ycombinator.com | 19 Dec 2023
  • Ggml: Add Flash Attention

    1 project | news.ycombinator.com | 13 May 2024
  • Finetuning an LLM-Based Spam Classifier with LoRA from Scratch

    1 project | news.ycombinator.com | 11 May 2024
  • Structured: Extract Data from Unstructured Input with LLM

    3 projects | dev.to | 10 May 2024
  • IBM Granite: A Family of Open Foundation Models for Code Intelligence

    3 projects | news.ycombinator.com | 7 May 2024