gpt-discord-bot
Example Discord bot written in Python that uses the completions API to have conversations with the `text-davinci-003` model, and the moderations API to filter messages. (by openai)
llama-cpp-python
Python bindings for llama.cpp (by abetlen)
| | gpt-discord-bot | llama-cpp-python |
|---|---|---|
| Mentions | 7 | 54 |
| Stars | 1,709 | 6,378 |
| Growth | 2.4% | - |
| Activity | 4.2 | 9.9 |
| Latest commit | 15 days ago | 4 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt-discord-bot
Posts with mentions or reviews of gpt-discord-bot.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-15.
- Discord bot trained on custom knowledge
I’ve set up an initial bot using the gpt-discord-bot package. I’m wondering how to train it with more data, like an entire PDF, instead of editing the instructions in config.yaml. Also, how would I go about hosting this so it can run constantly? Do I just run it on a standard Linux server with certain firewall settings?
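A common answer to this question is retrieval-augmented prompting rather than retraining: extract the PDF's text, split it into chunks, and inject the most relevant chunk into the prompt before calling the completions API. Below is a minimal, self-contained sketch of that idea; all function names are hypothetical, and the keyword-overlap scoring is a stand-in for real embedding search.

```python
# Retrieval-augmented prompting sketch (hypothetical helper names).
# Instead of retraining, relevant document chunks are injected into
# the prompt that gets sent to the completions API.

def chunk_text(text, chunk_size=200):
    """Split extracted PDF text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def best_chunk(question, chunks):
    """Pick the chunk sharing the most words with the question
    (a toy stand-in for embedding similarity search)."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(question, chunks):
    """Assemble a completion prompt that grounds the model in the chunk."""
    context = best_chunk(question, chunks)
    return (f"Answer using only the context below.\n\n"
            f"Context: {context}\n\nQuestion: {question}\nAnswer:")

chunks = chunk_text("Refunds are issued within 30 days. "
                    "Shipping takes two weeks. Support is open weekdays.")
prompt = build_prompt("How long do refunds take?", chunks)
print(prompt)
```

The resulting string would then be passed as the `prompt` to the completions API call the bot already makes; a production version would swap the keyword scoring for embeddings.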
- GPT Discord Bot personality, persistence and hosting
Hey guys, I'm new to AI chat bots but eager to learn. I'm playing around with [gpt-discord-bot](https://github.com/openai/gpt-discord-bot) and have a few questions to get me up and running if someone has time:
- Most efficient way to set up API serving of custom LLMs?
And here's a Discord bot that currently works with it that you may be able to learn from: https://github.com/openai/gpt-discord-bot
- I turned ChatGPT into a Discord bot with a voice and may have summoned AI Lucifer
Here's the page that tells you how to do it, but you'll need some programming knowledge in Python to get it to work. It's not just something you can invite to your server. https://github.com/openai/gpt-discord-bot
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
- Using Davinci 003, can we make it always pretend it’s someone else?
This one, right here >> https://github.com/openai/gpt-discord-bot Follow the instructions and you'll get it working.
- Paid $42 for ChatGPT Pro Yesterday and “getting at capacity error”
Go to the official OpenAI Discord (https://discord.gg/openai), then go to #gpt-discord-bot and that'll send you to https://github.com/openai/gpt-discord-bot to get the code. I'm running the code on a Raspberry Pi, but originally I ran it on my MacBook. Super easy to set up. It just needs an API key from OpenAI, which you can get here: https://beta.openai.com/account/api-keys once you give them a credit card for billing (https://beta.openai.com/account/billing/overview), and you can set limits on what they charge you. It's honestly super cheap. For Discord you just need a server you own to invite the bot to, and of course Discord lets you set up a server for free.
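To make the setup described above concrete, here is a stripped-down sketch of the kind of loop such a bot runs: render the conversation history into a completions-style prompt, then hand that to the API. The helper name and prompt format are illustrative, not the repo's actual code, and the API calls are shown only in comments since they need a live key.

```python
# Sketch of the bot's core prompt-building step (hypothetical helper;
# the real repo's structure differs).

def format_history(history, bot_name="Bot"):
    """Render (author, message) pairs into a completions-style prompt."""
    lines = [f"{author}: {text}" for author, text in history]
    lines.append(f"{bot_name}:")  # cue the model to answer as the bot
    return "\n".join(lines)

history = [("alice", "What is the capital of France?")]
prompt = format_history(history)
print(prompt)

# With the pre-1.0 OpenAI Python client, the bot would then call roughly:
#   completion = openai.Completion.create(
#       model="text-davinci-003", prompt=prompt, stop=["alice:"])
# and run openai.Moderation.create(input=text) on messages before replying.
```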
llama-cpp-python
Posts with mentions or reviews of llama-cpp-python.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-03-11.
- FLaNK AI for 11 March 2024
- OpenAI: Memory and New Controls for ChatGPT
I'll share the core bit; it took a while to figure out the right format. My main script is a hot mess using embeddings with SentenceTransformer, so I won't share that yet. For example, last night I did a PR for llama-cpp-python that shows how Phi might be used with JSON, only for the author to write almost exactly the same code at pretty much the same time. https://github.com/abetlen/llama-cpp-python/pull/1184
- TinyLlama LLM: A Step-by-Step Guide to Implementing the 1.1B Model on Google Colab
Python Bindings for llama.cpp
- Mistral-8x7B-Chat
- Running Mistral LLM on Apple Silicon Using Apple's MLX Framework Is Much Faster
If the model could be made to work with llama.cpp, then https://github.com/abetlen/llama-cpp-python might be more compact. llama.cpp only supports a limited list of model types though.
- Run ChatGPT-like LLMs on your laptop in 3 lines of code
- Code Llama, a state-of-the-art large language model for coding
https://github.com/abetlen/llama-cpp-python has a web server mode that replicates OpenAI's API, IIRC, and the readme shows it has Docker builds already.
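To illustrate the OpenAI-compatible server mode, here is a standard-library-only sketch that builds (but does not send) a request to its completions endpoint. It assumes the server has been started with something like `python -m llama_cpp.server --model <path>` and that it listens on port 8000; the port and payload fields are assumptions, not verified against the project's docs.

```python
# Sketch: talking to llama-cpp-python's OpenAI-compatible server using
# only the standard library. Port 8000 and the payload shape are assumed.
import json
import urllib.request

def completion_request(prompt, base="http://localhost:8000"):
    """Build (but do not send) a POST to the OpenAI-style /v1/completions
    endpoint exposed by the server."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 32}).encode()
    return urllib.request.Request(
        f"{base}/v1/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = completion_request("Q: What is 2+2? A:")
print(req.full_url)  # urllib.request.urlopen(req) would actually send it
```

Because the endpoint shape mirrors OpenAI's, tools built against the official API can often be pointed at this server just by overriding their base URL.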
- Meta: Code Llama, an AI Tool for Coding
LocalAI https://localai.io/ and LMStudio https://lmstudio.ai/ both have fairly complete OpenAI compatibility layers. llama-cpp-python has a FastAPI server as well: https://github.com/abetlen/llama-cpp-python/blob/main/llama_... (as of this moment it hasn't merged GGUF update yet though)
- First steps with llama
I went with Python, llama-cpp-python, since my goal is just to get a small project up and running locally.
- Show HN: Khoj – Chat Offline with Your Second Brain Using Llama 2
I see you’re using gpt4all; do you have a supported way to change the model being used for local inference?
A number of apps that are designed for OpenAI’s completion/chat APIs can simply point to the endpoints served by llama-cpp-python [0], and function in (largely) the same way, while supporting the various models and quants supported by llama.cpp. That would allow folks to run larger models on the hardware of their choice (including Apple Silicon with Metal acceleration) or using other proxies like openrouter.io.
[0]: https://github.com/abetlen/llama-cpp-python