chat-llama-discord-bot
A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other Large Language Model (LLM) supported by text-generation-webui or llama.cpp. (by xNul)
yal-discord-bot
Yet Another LLaMA/ALPACA Discord Bot (by AmericanPresidentJimmyCarter)
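Whatever backend a bot like these uses, it has to respect Discord's 2,000-character message limit when relaying long LLM completions. A minimal sketch of the kind of chunking involved (plain Python; the split heuristic is illustrative and not taken from either project):

```python
DISCORD_MSG_LIMIT = 2000  # Discord's per-message character limit

def chunk_reply(text: str, limit: int = DISCORD_MSG_LIMIT) -> list[str]:
    """Split an LLM completion into Discord-sized messages,
    preferring to break at a newline so output stays readable."""
    chunks = []
    while len(text) > limit:
        # Break at the last newline within the limit, if any.
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

Each chunk can then be sent as a separate Discord message in order.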
| | chat-llama-discord-bot | yal-discord-bot |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 113 | 72 |
| Growth | - | - |
| Activity | 6.5 | 6.5 |
| Last commit | 11 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
chat-llama-discord-bot
Posts with mentions or reviews of chat-llama-discord-bot. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-09.
yal-discord-bot
Posts with mentions or reviews of yal-discord-bot. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-22.
- I wrote a Discord bot to host your own ChatGPT-style chatbot with ALPACA finetuned LLaMA weights on consumer GPUs. 13b fits in a 3080, 30b fits in a 3090. No censorship, ask anything you want!
- [P] Discord Chatbot for LLaMA 4-bit quantized that runs 13b in <9 GiB VRAM
- Discord Chatbot for LLaMA 4-bit quantized that runs 13b in <9 GiB VRAM (r/MachineLearning)
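The "<9 GiB VRAM" figure for a 13B model follows from 4-bit quantization: roughly 13 billion parameters at half a byte each, with the remaining headroom going to activations and the KV cache. A back-of-the-envelope estimate (the formula counts weights only; overhead varies by implementation):

```python
def quantized_vram_gib(n_params_billion: float, bits: int = 4) -> float:
    """Estimate VRAM (in GiB) needed just for the weights
    of a model quantized to the given bit width."""
    bytes_total = n_params_billion * 1e9 * bits / 8
    return bytes_total / 2**30

# Weights alone for a 4-bit 13B model:
print(f"{quantized_vram_gib(13, bits=4):.2f} GiB")  # prints "6.05 GiB"
```

About 6 GiB for weights leaves room under 9 GiB for the KV cache and runtime overhead, which is consistent with the posts' claim that 13B fits on a 3080 and 30B on a 3090.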
What are some alternatives?
When comparing chat-llama-discord-bot and yal-discord-bot you can also consider the following projects:
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
llama.cpp - LLM inference in C/C++
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
signal-aichat - An AI chatbot for Signal powered by Google Bard, Bing Chat, ChatGPT, HuggingChat, and llama.cpp
pifs - πfs - the data-free filesystem!
vanilla-llama - Plain pytorch implementation of LLaMA
alpaca-lora - Instruct-tune LLaMA on consumer hardware
fantasy_football_chat_bot - GroupMe Discord and Slack Chatbot for ESPN Fantasy Football
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
LLaMA_MPS - Run LLaMA inference on Apple Silicon GPUs.