gpt-discord-bot vs alpaca.cpp

(alpaca.cpp is marked DISCONTINUED)
| | gpt-discord-bot | alpaca.cpp |
|---|---|---|
| Mentions | 7 | 93 |
| Stars | 1,690 | 9,878 |
| Growth | 2.7% | - |
| Activity | 4.7 | 9.4 |
| Latest commit | about 1 month ago | 11 months ago |
| Language | Python | C |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
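The exact activity formula isn't published here, but the idea of weighting recent commits more heavily than older ones can be sketched with an exponential decay over commit age. Everything below (the function name, the 90-day half-life) is an illustrative assumption, not the tracker's real metric:

```python
from datetime import datetime, timedelta
from math import exp, log

def activity_score(commit_dates, now, half_life_days=90.0):
    """Recency-weighted commit count: a commit from today contributes
    ~1.0, and each commit's weight halves every `half_life_days`."""
    decay = log(2) / half_life_days
    return sum(exp(-decay * (now - d).days) for d in commit_dates)

# Ten commits in the last week outscore ten commits spread over two years.
now = datetime(2024, 1, 1)
recent = [now - timedelta(days=i) for i in range(10)]
stale = [now - timedelta(days=73 * i) for i in range(10)]
burst_score = activity_score(recent, now)
spread_score = activity_score(stale, now)
```

This captures the stated behavior: two projects with the same total commit count can get very different scores depending on how recent those commits are.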
gpt-discord-bot
- Most efficient way to set up API serving of custom LLMs?
And here's a Discord bot that currently works with it that you may be able to learn from: https://github.com/openai/gpt-discord-bot
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
- Paid $42 for ChatGPT Pro Yesterday and “getting at capacity error”
Go to the official OpenAI Discord - https://discord.gg/openai - then go to #gpt-discord-bot, which will send you to https://github.com/openai/gpt-discord-bot to get the code. I'm running the code on a Raspberry Pi, but originally I ran it on my MacBook. It's super easy to set up: it just needs an API key from OpenAI, which you can get here: https://beta.openai.com/account/api-keys once you give them a credit card for billing (https://beta.openai.com/account/billing/overview), and you can set limits on what they charge you. It's honestly super cheap. For Discord you just need a server you own to invite the bot to, and of course Discord lets you set up a server for free.
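The setup described above boils down to wiring two credentials into the bot's environment. A hypothetical `.env` sketch is below; the variable names are assumptions from memory, so check the repo's own `.env.example` for the real ones:

```shell
# Hypothetical .env sketch - verify names against the repo's .env.example
OPENAI_API_KEY=sk-...     # from https://beta.openai.com/account/api-keys
DISCORD_BOT_TOKEN=...     # from the Discord developer portal
ALLOWED_SERVER_IDS=...    # ID(s) of the server(s) you invited the bot to
```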
alpaca.cpp
- Hollywood’s Screenwriters Are Right to Fear AI
Alpaca
- Alpaca Vs. Final Jeopardy
Models were run using CPU only and using alpaca.cpp
The model I found was in 8 parts. The alpaca.cpp chat client (chat.cpp) needs to be modified to run the 8 part model, documented here: https://github.com/antimatter15/alpaca.cpp/issues/149
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
Try the instructions on this GitHub repo: https://github.com/antimatter15/alpaca.cpp. It's not the best one, but I was able to run this model on my Linux machine with 16 GB of memory; I think it's a good starting point.
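For reference, the usual alpaca.cpp flow is roughly the following; the `make chat` target and the default model filename are recalled from the repo's README, so verify against the repo before relying on them:

```shell
# Sketch of the typical alpaca.cpp setup (verify against the README)
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat
# Place the quantized weights (e.g. ggml-alpaca-7b-q4.bin) in this
# directory, then start the interactive chat client:
./chat
```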
- ChatGPT Reignited My Passion For Coding
Yes, at the moment I'm toying with Alpaca 7B/13B in a local install.
- Benchmarks for LLMs on Consumer Hardware
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
All versions of ggml ALPACA models (legacy format from alpaca.cpp, and also all the newer ggml alpacas on huggingface)
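The "legacy format from alpaca.cpp" and the newer ggml variants can be told apart by the 4-byte magic at the start of the model file. The magic constants below are recalled from llama.cpp's history (`ggml`, `ggmf`, `ggjt`) and should be treated as a sketch, not a format specification:

```python
import os
import struct
import tempfile

# Assumed ggml-family magic values (little-endian uint32 at file offset 0):
GGML = 0x67676D6C  # 'ggml': unversioned legacy format (the alpaca.cpp era)
GGMF = 0x67676D66  # 'ggmf': versioned successor
GGJT = 0x67676A74  # 'ggjt': later mmap-friendly format

def ggml_variant(path):
    """Classify a model file by its leading 4-byte magic number."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return {GGML: "ggml (legacy)", GGMF: "ggmf", GGJT: "ggjt"}.get(magic, "unknown")

# Demo on a stub file carrying the legacy magic:
stub = os.path.join(tempfile.mkdtemp(), "stub.bin")
with open(stub, "wb") as f:
    f.write(struct.pack("<I", GGML))
variant = ggml_variant(stub)
```

Loaders like KoboldCpp that advertise support for "all versions" typically branch on exactly this kind of magic check before parsing the rest of the header.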
- What's the difference between oobabooga vs llama.cpp vs fastgpt
- Fun with Alpaca 30B
Ah, so it is running 30b, but the name is just a semantic issue? Also, I’m running it using the antimatter download if that matters (https://github.com/antimatter15/alpaca.cpp/releases/tag/81bd894)
I am indeed running it through powershell, which in my opinion is a far better experience than trying to run it through a web ui like Dalai. Not running through llama.cpp, but alpaca.cpp on: https://github.com/antimatter15/alpaca.cpp
What are some alternatives?
- gpt4all - gpt4all: run open-source LLMs anywhere
- llama.cpp - LLM inference in C/C++
- coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
- ggml - Tensor library for machine learning
- text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
- alpaca-lora - Instruct-tune LLaMA on consumer hardware
- koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
- bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
- GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
- ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model
- stable-diffusion-webui - Stable Diffusion web UI
- dalai - The simplest way to run LLaMA on your local machine