| | WebGPT | web-llm |
|---|---|---|
| Mentions | 7 | 43 |
| Stars | 3,516 | 9,102 |
| Growth | - | 2.4% |
| Activity | 8.0 | 9.1 |
| Latest commit | 4 months ago | 9 days ago |
| Language | JavaScript | TypeScript |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
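The idea of weighting recent commits more heavily can be sketched as a decaying sum. The formula and half-life below are purely illustrative assumptions, not the actual metric behind the numbers above:

```typescript
// Hypothetical recency-weighted activity score (illustrative only;
// not the real formula used by the tracker). Each commit contributes
// 2^(-age / halfLife), so a recent commit counts more than an old one.
function activityScore(commitAgesDays: number[], halfLife = 30): number {
  return commitAgesDays.reduce((sum, age) => sum + 2 ** (-age / halfLife), 0);
}

// Three commits this week outscore three commits from ~10 months ago,
// even though both projects have the same commit count.
const recentProject = activityScore([1, 2, 3]);
const staleProject = activityScore([300, 310, 320]);
```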
WebGPT
- WebGPT: GPT Model on the Browser with WebGPU
- WebGPT: Run GPT model on the browser with WebGPU
- Pinbot – An extension to privately search one's browser history with AI
- Browser Extension: "Find in Page" all synonyms instead of the exact word.
  WebGPT might be a good starting point if anyone's interested: Repo. It's basically a local language model powered by the new WebGPU API.
- Run GPT model on the browser with WebGPU
- WebGPU GPT Model Demo
  Question: I can see in the code the WGSL needed to implement inference on the GPU: https://github.com/0hq/WebGPT/blob/main/kernels.js
  Could this code also be used to train models, or only for inference? What I'm getting at is: could I take the WGSL and, using Rust's wgpu, create a mini ChatGPT that runs on all GPUs?
- WebGPT: Run GPT2 on the Browser with WebGPU
web-llm
- Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU
  Looks like it uses this: https://github.com/mlc-ai/web-llm
- What stack would you recommend to build a LLM app in React without a backend?
- When LLM doesn’t fit into memory, how to make it work?
  So I was playing with MLC's WebLLM locally. I got my Mistral 7B model installed and quantised, then converted it with the MLC library into a Metal package for Apple chips. Now it takes only 3.5 GB of memory.
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
  Maybe they're talking about https://github.com/mlc-ai/mlc-llm, which is used for web-llm (https://github.com/mlc-ai/web-llm)? It seems to be using TVM.
- Local embeddings model for javascript
- This makes deploying AI language models so much easier
  Link to the GitHub repo for those who want to learn about MLC straight from the source: https://github.com/mlc-ai/web-llm. The web demo is cool, but it takes a long time to load the first time.
- April 2023
  web-llm: Bringing large-language models and chat to web browsers. (https://github.com/mlc-ai/web-llm)
- Running a small model on a phone?
- Weekly Megathread - 14 May 2023
  WebLLM - https://mlc.ai/web-llm/
- WebLLM - Bringing LLMs based chatbot to your web browser
What are some alternatives?
FindSynonyms - Chrome extension that replaces words in web pages with their synonyms.
chainlit - Build Conversational AI in minutes ⚡️
gpt-tfjs - GPT in TensorFlow.js
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
transformers.js - State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
gpt4all - gpt4all: run open-source LLMs anywhere
three.js - JavaScript 3D Library.
StableLM - StableLM: Stability AI Language Models
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
duckdb-wasm - WebAssembly version of DuckDB
triton - Development repository for the Triton language and compiler
textSQL