| | MIOpen | web-llm |
|---|---|---|
| Mentions | 9 | 43 |
| Stars | 983 | 9,300 |
| Growth | 1.4% | 4.5% |
| Activity | 9.7 | 9.1 |
| Latest commit | 4 days ago | 12 days ago |
| Language | Assembly | TypeScript |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MIOpen
- AI Libraries and AI Frameworks are "Not Available" for ROCm on Windows. Does that mean not yet, or never?
- [Project] MLC LLM: Universal LLM Deployment with GPU Acceleration
More than three months behind schedule...
- Someone has run SD with a release candidate of ROCm 5.5 on RDNA 3 and gets 15 it/s
- ROCm on a 7900 XTX
https://github.com/ROCmSoftwarePlatform/MIOpen/milestones — the MIOpen milestones for ROCm 5.5 and 5.6 are done, but those releases still aren't out. I still don't understand the point of selling AI/ML-capable GPUs without releasing the drivers (ROCm, etc.) for them.
- Sapphire Pulse 7900 xt (Techpowerup review)
- Man I wish I could do all this cool shit too
- Issues with Automatic1111 WebUI on Ubuntu 22.04.1 LTS with AMD GPU
- MIOpen - AMD's Machine Intelligence Library
- Radeon ROCm 4.3 Released With HMM Allocations, Many Other Improvements
web-llm
- Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU
Looks like it uses this: https://github.com/mlc-ai/web-llm
- What stack would you recommend to build a LLM app in React without a backend?
- When an LLM doesn't fit into memory, how do you make it work?
So I was playing with MLC WebLLM locally. I got my Mistral 7B model installed and quantized, then converted it with the MLC library into a Metal package for Apple chips. Now it takes only 3.5 GB of memory.
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
- Local embeddings model for javascript
- This makes deploying AI language models so much easier
Link to the GitHub repo for those who want to learn about MLC straight from the source. The web demo is cool but takes a long time to load the first time. https://github.com/mlc-ai/web-llm
- April 2023
web-llm: Bringing large-language models and chat to web browsers. (https://github.com/mlc-ai/web-llm)
- Running a small model on a phone?
- Weekly Megathread - 14 May 2023
WebLLM - https://mlc.ai/web-llm/
- WebLLM - Bringing LLMs based chatbot to your web browser
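One commenter above reports that a quantized Mistral 7B fits in about 3.5 GB of memory. A quick back-of-envelope check (a sketch assuming 4-bit weights, which is what MLC's q4-style quantization presets use) shows where that figure comes from:

```python
# Hypothetical back-of-envelope check for the ~3.5 GB figure quoted above:
# a 7B-parameter model quantized to 4 bits per weight.
params = 7_000_000_000      # approximate Mistral 7B parameter count
bits_per_weight = 4         # 4-bit quantization (MLC q4-style preset)

bytes_total = params * bits_per_weight / 8
gb = bytes_total / 1e9
print(f"{gb:.1f} GB")       # → 3.5 GB
```

Real builds carry some extra overhead (embeddings, KV cache, scales for the quantized groups), so the actual resident size varies, but the weights alone account for roughly the reported number.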
What are some alternatives?
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
chainlit - Build Conversational AI in minutes ⚡️
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
k-diffusion-directml - Karras et al. (2022) diffusion models for PyTorch
gpt4all - gpt4all: run open-source LLMs anywhere
stablediffusion-directml - High-Resolution Image Synthesis with Latent Diffusion Models
StableLM - StableLM: Stability AI Language Models
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
stable-diffusion-webui - Stable Diffusion web UI
duckdb-wasm - WebAssembly version of DuckDB