web-llm alternatives
Similar projects and alternatives to web-llm
-
mlc-llm
Enables everyone to develop, optimize, and deploy AI models natively on their own devices.
-
FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
-
text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, llama.cpp (GGUF), Llama models.
-
bacalhau
Compute over Data framework for public, transparent, and optionally verifiable computation
-
LocalAI
:robot: Self-hosted, community-driven, local OpenAI compatible API. Drop-in replacement for OpenAI running LLMs on consumer-grade hardware. Free Open Source OpenAI alternative. No GPU required. Runs ggml, gguf, GPTQ, onnx, TF compatible models: llama, llama2, gpt4all, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others
-
turbopilot
Turbopilot is an open-source, large-language-model-based code-completion engine that runs locally on the CPU.
-
Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
-
onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
-
awesome-chatgpt
🧠 A curated list of awesome ChatGPT resources, including libraries, SDKs, APIs, and more. (by eon01)
web-llm reviews and mentions
-
April 2023
web-llm: Bringing large language models and chat to web browsers. (https://github.com/mlc-ai/web-llm)
-
Weekly Megathread - 14 May 2023
WebLLM - https://mlc.ai/web-llm/
-
America Forgot About IBM Watson. Is ChatGPT Next?
The quality does not yet measure up to ChatGPT (even GPT-3.5), but yes, it is possible.
Probably the fastest way to get started is to look into [0] - this only requires a beta Chromium browser with WebGPU. For a more integrated setup, I am under the impression [1] is the main tool used.
If you want to take a look at the quality that is possible before getting started, [2] is an online service by Hugging Face that hosts one of the best of the current generation of open models.
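Several of the mentions on this page hinge on the browser having WebGPU (a beta Chromium, or Chrome 113+, at the time these were written). As a minimal sketch, assuming only the standard `navigator.gpu.requestAdapter()` entry point, a page can feature-detect WebGPU before attempting to load a model; the `GPULike` interface here is a simplified stand-in for the real `GPU` type:

```typescript
// Minimal WebGPU feature detection, as typically done before loading a
// browser-based LLM. `navigator.gpu` is the standard WebGPU entry point
// and is undefined in browsers without WebGPU support.

interface GPULike {
  requestAdapter(): Promise<unknown | null>;
}

// Takes a navigator-like object as a parameter so the check is easy to test.
async function hasWebGPU(nav: { gpu?: GPULike }): Promise<boolean> {
  if (!nav.gpu) return false; // API not exposed at all
  const adapter = await nav.gpu.requestAdapter();
  return adapter !== null; // null means no usable GPU adapter was found
}

// In a real page: const ok = await hasWebGPU(navigator);
```

Passing the navigator in as a parameter keeps the check unit-testable outside a browser.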
-
How I built a Document Q&A Bot using PromptSandbox, a no-code AI chatbot builder
I realized that by chaining OpenAI API calls and incorporating some basic logic, I could automate my repetitive tasks with AI without incurring the high costs typically associated with AI apps. An OpenAI key is the sole expense involved; PromptSandbox itself is currently free of charge, and it plans to offer a generous free tier in the future, along with support for models like WebLLM that run LLMs entirely within the browser. I am thrilled about the upcoming developments.
-
Ask HN: What tech is under the radar with all attention on ChatGPT etc.
> In its current state, can you train on PyTorch, export to ONNX, load ONNX in JavaScript/WASM, then use it for WebGPU inference?
I believe so. ONNX Runtime very recently merged a WebGPU backend: https://news.ycombinator.com/item?id=35694553
You can also go directly from PyTorch to WebGPU with Apache TVM. (ONNX is also supported, but my understanding is that it's better to go direct). This is an example using an LLM trained with PyTorch (I think) and run in the browser: https://mlc.ai/web-llm/
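To make the pipeline above concrete, here is a minimal sketch of choosing onnxruntime-web execution providers, preferring the newly merged WebGPU backend and falling back to WASM. The `InferenceSession.create` call in the comments reflects the onnxruntime-web API as I understand it; treat the exact option and provider names as assumptions to verify against the library's documentation.

```typescript
// Choose onnxruntime-web execution providers: prefer WebGPU when the
// browser exposes it, and always keep the WASM backend as a fallback.
function executionProviders(webgpuAvailable: boolean): string[] {
  return webgpuAvailable ? ["webgpu", "wasm"] : ["wasm"];
}

// Browser usage with onnxruntime-web loaded as `ort` (names assumed):
// const session = await ort.InferenceSession.create("model.onnx", {
//   executionProviders: executionProviders("gpu" in navigator),
// });
// const results = await session.run({ input: someTensor });
```

Keeping WASM in the list means the same code path degrades gracefully in browsers without WebGPU.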
-
A brief history of LLaMA models
I had it running before with Dalai (https://github.com/cocktailpeanut/dalai) but have since moved to the browser-based WebGPU method (https://mlc.ai/web-llm/), which uses Vicuna 7B and is quite good.
-
[Project] MLC LLM: Universal LLM Deployment with GPU Acceleration
It’s pretty smooth to use an ML compiler to target various GPU backends. The project was originally WebGPU-only (https://mlc.ai/web-llm/), which is a few hundred lines, and it then takes only tens of lines to re-target it to Vulkan, Metal, and CUDA!
-
Can AI models act as an offline internet?
You already can, and you can also connect it to your data.
Give it a shot (Windows/Linux/Mac installer):
Local version running in your browser (if you have WebGPU):
Transformers running locally in the browser (no GPU required):
https://xenova.github.io/transformers.js/
Exciting times.
-
Hey 👋! Can you guys post your portfolio for some inspiration?
I custom-built the UI with inspiration from several chat clients. For the backend, it's actually a few things. Starting in a few days I will default to WebLLM (https://mlc.ai/web-llm/) for users with Chrome 113 that can support WebGPU, which runs Vicuna 7B entirely client-side in the browser. But currently the default is Hugging Face's Inference API (https://huggingface.co/inference-api), which is free to use. I also support OpenAI's API. These can be changed from within "/session.json".
-
Running Vicuna 7B on my Personal Website w/WebGPU
I've integrated WebLLM (https://mlc.ai/web-llm/) into the AI chat app on my personal website (https://dustinbrett.com/). It runs all inference locally in the browser.
-
Stats
mlc-ai/web-llm is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of web-llm is TypeScript.