WebGPU GPT Model Demo

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • web-llm

    Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.

  • It indeed works and loads quickly. Currently I am more interested in the Vicuna 7B example from https://mlc.ai/web-llm/

    Also, instead of just "Update Chrome to v113", the domain owner could sign up for an origin trial: https://developer.chrome.com/origintrials/#/view_trial/11821...
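
    For context, an origin trial boils down to serving a token for the page and feature-detecting navigator.gpu. A minimal sketch, assuming the standard meta-tag token delivery and a hypothetical startDemo() entry point (run inside a module script or an async function):

        // Gate the demo on WebGPU support instead of a hard "update Chrome" wall.
        // An origin-trial token (from the Chrome origin trials dashboard) can be
        // delivered before this runs, e.g. via:
        //   <meta http-equiv="origin-trial" content="YOUR_TOKEN_HERE">
        if (!navigator.gpu) {
          document.body.textContent =
            "WebGPU is unavailable (needs Chrome 113+ or a valid origin-trial token).";
        } else {
          const adapter = await navigator.gpu.requestAdapter();
          if (!adapter) {
            document.body.textContent = "No suitable GPU adapter found.";
          } else {
            // startDemo is a placeholder for the page's own entry point.
            startDemo(await adapter.requestDevice());
          }
        }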

  • WebGPT

    Run GPT models in the browser with WebGPU. An implementation of GPT inference in under ~1,500 lines of vanilla JavaScript.

  • Question: in the code I can see the WGSL that's needed to implement inference on the GPU: https://github.com/0hq/WebGPT/blob/main/kernels.js

    Could this code also be used to train models, or only for inference?

    What I'm getting at is: could I take the WGSL and, using Rust's wgpu, create a mini ChatGPT that runs on all GPUs?
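
    For reference, those kernels use the standard WebGPU compute path, and Rust's wgpu exposes the same pipeline/bind-group/dispatch model, so the WGSL itself carries over. Note, though, that the repo describes the kernels as implementing inference (the forward pass); training would additionally need backward-pass kernels and an optimizer. A minimal dispatch sketch in vanilla JavaScript, with a trivial doubling kernel standing in for the repo's real inference kernels (run in an async/module context):

        // Compile and dispatch a WGSL compute kernel over a storage buffer.
        const adapter = await navigator.gpu.requestAdapter();
        const device = await adapter.requestDevice();

        const wgsl = `
          @group(0) @binding(0) var<storage, read_write> data: array<f32>;
          @compute @workgroup_size(64)
          fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            if (id.x < arrayLength(&data)) {
              data[id.x] = data[id.x] * 2.0; // stand-in for a real inference kernel
            }
          }`;

        const pipeline = device.createComputePipeline({
          layout: "auto",
          compute: { module: device.createShaderModule({ code: wgsl }), entryPoint: "main" },
        });

        // Upload input data into a storage buffer.
        const input = new Float32Array([1, 2, 3, 4]);
        const storage = device.createBuffer({
          size: input.byteLength,
          usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
          mappedAtCreation: true,
        });
        new Float32Array(storage.getMappedRange()).set(input);
        storage.unmap();

        // Separate buffer for reading results back to the CPU.
        const readback = device.createBuffer({
          size: input.byteLength,
          usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
        });

        const bindGroup = device.createBindGroup({
          layout: pipeline.getBindGroupLayout(0),
          entries: [{ binding: 0, resource: { buffer: storage } }],
        });

        // Record and submit the compute pass, then copy results out.
        const encoder = device.createCommandEncoder();
        const pass = encoder.beginComputePass();
        pass.setPipeline(pipeline);
        pass.setBindGroup(0, bindGroup);
        pass.dispatchWorkgroups(Math.ceil(input.length / 64));
        pass.end();
        encoder.copyBufferToBuffer(storage, 0, readback, 0, input.byteLength);
        device.queue.submit([encoder.finish()]);

        await readback.mapAsync(GPUMapMode.READ);
        console.log(new Float32Array(readback.getMappedRange())); // Float32Array [2, 4, 6, 8]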

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • WebGPT: GPT Model on the Browser with WebGPU

    1 project | news.ycombinator.com | 1 Apr 2024
  • What stack would you recommend to build a LLM app in React without a backend?

    2 projects | /r/react | 8 Dec 2023
  • When LLM doesn’t fit into memory, how to make it work?

    1 project | /r/LocalLLaMA | 5 Nov 2023
  • WebGPT: Run GPT model on the browser with WebGPU

    1 project | news.ycombinator.com | 12 Aug 2023
  • Local embeddings model for javascript

    1 project | /r/LangChain | 11 Jul 2023