gpt-discord-bot
web-llm
| | gpt-discord-bot | web-llm |
|---|---|---|
| Mentions | 7 | 42 |
| Stars | 1,709 | 9,018 |
| Growth | 2.4% | 4.2% |
| Activity | 4.2 | 9.0 |
| Last commit | 15 days ago | 7 days ago |
| Language | Python | TypeScript |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt-discord-bot
- Discord bot trained on custom knowledge
I’ve set up an initial bot using the gpt-discord-bot package. I’m wondering how to train it with more data, like an entire PDF for example, instead of editing the instructions in config.yaml. Also, how would I go about hosting this so it can run constantly? Do I just run it on a standard Linux server with certain firewall settings?
- GPT Discord Bot personality, persistence and hosting
Hey guys, I'm new to AI chat bots but eager to learn. I'm playing around with [gpt-discord-bot](https://github.com/openai/gpt-discord-bot) and have a few questions to get me up and running if someone has time:
- Most efficient way to set up API serving of custom LLMs?
And here's a Discord bot that currently works with it that you may be able to learn from: https://github.com/openai/gpt-discord-bot
- I turned ChatGPT into a Discord bot with a voice and may have summoned AI Lucifer
Here's the page that tells you how to do it, but you'll need some programming knowledge in Python to get it to work. It's not just something you can invite to your server: https://github.com/openai/gpt-discord-bot
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
- Using Davinci 003, can we make it always pretend it’s someone else?
This one, right here >> https://github.com/openai/gpt-discord-bot Follow the instructions and you'll get it working.
- Paid $42 for ChatGPT Pro Yesterday and “getting at capacity error”
Go to the official OpenAI Discord - https://discord.gg/openai - then go to #gpt-discord-bot, which will send you to https://github.com/openai/gpt-discord-bot to get the code. I'm running the code on a Raspberry Pi, but originally I ran it on my MacBook. Super easy to set up. It just needs an API key from OpenAI, which you can get here: https://beta.openai.com/account/api-keys once you give them a credit card for billing (https://beta.openai.com/account/billing/overview), and you can set limits on what they charge you. It's honestly super cheap. For Discord you just need a server you own to invite the bot to, and of course Discord lets you set up a server for free.
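The setup described above mostly comes down to filling in a few environment variables. A hypothetical `.env` for gpt-discord-bot, with variable names taken from the project's README (double-check them against your checkout; the values here are placeholders only):

```ini
# Example .env for gpt-discord-bot -- placeholder values, not real credentials
OPENAI_API_KEY=sk-your-openai-key
DISCORD_BOT_TOKEN=your-bot-token-from-the-discord-developer-portal
DISCORD_CLIENT_ID=your-application-client-id
ALLOWED_SERVER_IDS=comma,separated,server,ids
```

With the file in place, the bot is started with the README's run command (something like `python -m src.main` from the repo root).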
web-llm
- What stack would you recommend to build a LLM app in React without a backend?
- When LLM doesn’t fit into memory, how to make it work?
So I was playing with MLC web-llm locally. I got my Mistral 7B model installed and quantised, then converted it using the MLC lib to a Metal package for Apple chips. Now it takes only 3.5 GB of memory.
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
- Local embeddings model for javascript
- this makes deploying AI language models so much easier
Link to github for those who want to know about MLC straight from them. Web demo is cool but takes a long time to load first time. https://github.com/mlc-ai/web-llm
- April 2023
web-llm: Bringing large-language models and chat to web browsers. (https://github.com/mlc-ai/web-llm)
- Running a small model on a phone?
- Weekly Megathread - 14 May 2023
WebLLM - https://mlc.ai/web-llm/
- WebLLM - Bringing LLMs based chatbot to your web browser
- Google is bringing AI to the browser with WebGPU in Chrome
which makes this work in the browser: https://mlc.ai/web-llm/#chat-demo
What are some alternatives?
serge - A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.
chainlit - Build Conversational AI in minutes ⚡️
turbopilot - Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
gpt4all - gpt4all: run open-source LLMs anywhere
LocalAI - 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware - no GPU required. Runs gguf, transformers, diffusers and many other model architectures, and can generate text, audio, video and images, with voice-cloning capabilities.
StableLM - StableLM: Stability AI Language Models
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
duckdb-wasm - WebAssembly version of DuckDB