web-llm vs chainlit

| | web-llm | chainlit |
|---|---|---|
| Mentions | 42 | 7 |
| Stars | 9,102 | 5,401 |
| Growth | 2.4% | 5.6% |
| Activity | 9.1 | 9.7 |
| Latest commit | 8 days ago | 8 days ago |
| Language | TypeScript | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
web-llm
- What stack would you recommend to build a LLM app in React without a backend?
- When LLM doesn’t fit into memory, how to make it work?
  So I was playing with MLC WebLLM locally. I got my Mistral 7B model installed and quantised, converted it with the MLC library to a Metal package for Apple chips, and now it takes only 3.5 GB of memory.
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
  Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
- Local embeddings model for JavaScript
- This makes deploying AI language models so much easier
  Link to GitHub for those who want to know about MLC straight from them. The web demo is cool but takes a long time to load the first time. https://github.com/mlc-ai/web-llm
- April 2023
  web-llm: Bringing large-language models and chat to web browsers. (https://github.com/mlc-ai/web-llm)
- Running a small model on a phone?
- Weekly Megathread - 14 May 2023
  WebLLM - https://mlc.ai/web-llm/
- WebLLM - Bringing LLM-based chatbots to your web browser
- Google is bringing AI to the browser with WebGPU in Chrome
  which makes this work in the browser: https://mlc.ai/web-llm/#chat-demo
chainlit
- Chat with your Github Repo using llama_index and chainlit
  Chainlit is an open-source project that makes it easy to build ChatGPT-like frontends and the other features a conversational AI app needs, so you can focus on the core logic instead of the basics. It is dead simple to work with.
- Show HN: I made an app to use local AI as daily driver
- Help with conversational_qa_chain - Streamlit Messages
- AI Chatbot powered by Amazon Bedrock 🚀🤖
  I have created a sample chatbot application that uses Chainlit and LangChain to showcase Amazon Bedrock.
- Chainlit: Create ChatGPT-like UIs on top of Python code
- Personal movie recommendation agent with GPT4 + Neo4J
  For the interface, I'm using Chainlit, a new UI library for building LLM apps, with an integration with LangChain.
- Chainlit/chainlit: Build Python LLM apps in minutes ⚡️
What are some alternatives?
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
langchain - 🦜🔗 Build context-aware reasoning applications
gpt4all - gpt4all: run open-source LLMs anywhere
nitro - Create apps 10x quicker, without Javascript/HTML/CSS.
StableLM - StableLM: Stability AI Language Models
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
react-llm - Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. Just useLLM().
duckdb-wasm - WebAssembly version of DuckDB
FreedomGPT - This codebase is for a React and Electron-based app that executes the FreedomGPT LLM locally (offline and private) on Mac and Windows using a chat-based interface
triton - Development repository for the Triton language and compiler
open-webui - User-friendly WebUI for LLMs (Formerly Ollama WebUI)