gpt-discord-bot
gpt4all
| | gpt-discord-bot | gpt4all |
|---|---|---|
| Mentions | 7 | 139 |
| Stars | 1,690 | 62,932 |
| Growth | 2.7% | 3.8% |
| Activity | 4.7 | 9.8 |
| Last commit | about 1 month ago | 1 day ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt-discord-bot
-
Most efficient way to set up API serving of custom LLMs?
And here's a Discord bot that currently works with it that you may be able to learn from: https://github.com/openai/gpt-discord-bot
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
-
Paid $42 for ChatGPT Pro Yesterday and “getting at capacity error”
Go to the official OpenAI Discord - https://discord.gg/openai - then go to #gpt-discord-bot, which will send you to https://github.com/openai/gpt-discord-bot to get the code. I'm running the code on a Raspberry Pi, but originally I ran it on my MacBook. Super easy to set up. It just needs an API key from OpenAI, which you can get here: https://beta.openai.com/account/api-keys once you give them a credit card for billing (https://beta.openai.com/account/billing/overview), and you can set limits on what they charge you. It's honestly super cheap. For Discord you just need a server you own to invite the bot to, and of course Discord lets you set up a server for free.
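For reference, the bot described above is configured through a `.env` file. A minimal sketch follows; the variable names are taken from the repository's example config (double-check against the current README), and all values are placeholders:

```shell
# .env — configuration sketch for openai/gpt-discord-bot
# Variable names per the repo's example config; values are placeholders.
OPENAI_API_KEY=sk-your-key-from-the-openai-dashboard
DISCORD_BOT_TOKEN=token-from-the-discord-developer-portal
DISCORD_CLIENT_ID=your-bot-application-id
ALLOWED_SERVER_IDS=comma-separated-ids-of-servers-the-bot-may-join
```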
gpt4all
- Show HN: I made an app to use local AI as daily driver
-
Ollama Python and JavaScript Libraries
I don’t know if Ollama can do this but https://gpt4all.io/ can.
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Gpt4all is a local desktop app with a Python API that can be trained on your documents: https://gpt4all.io/
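A minimal sketch of the "ask questions about your documents" pattern the comment describes, assuming the `gpt4all` Python bindings (`pip install gpt4all`); the model filename, prompt template, and document path are illustrative, not from the original:

```python
# Stuff document text into the prompt, then generate locally with gpt4all.
# The prompt-building part is plain Python; the generation part assumes
# the gpt4all package and a downloaded model, so it is guarded.

PROMPT_TEMPLATE = (
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}"
)

def build_prompt(context: str, question: str) -> str:
    """Combine retrieved document text and a question into one prompt."""
    return PROMPT_TEMPLATE.format(context=context, question=question)

if __name__ == "__main__":
    try:
        from gpt4all import GPT4All  # assumed bindings; pip install gpt4all
        model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # illustrative model
        doc_text = open("my_document.txt").read()        # illustrative path
        print(model.generate(build_prompt(doc_text, "What is this about?"),
                             max_tokens=200))
    except Exception:
        # gpt4all or the model/document may be missing; the prompt-building
        # helper above still works standalone.
        pass
```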
-
WyGPT: Minimal mature GPT model in C++
The readme page is cryptic. What does 'mature' mean in this context? What is the sample text a continuation of?
Having a gif of the thing in use would be great, similar to the gpt4all readme page. (https://github.com/nomic-ai/gpt4all)
-
LibreChat
Check https://github.com/nomic-ai/gpt4all instead.
Totally misleading.
What you're actually looking for is gpt4all.
-
OpenAI Negotiations to Reinstate Altman Hit Snag over Board Role
"I ran performance tests on two systems; here are the results of system 1, and here are the results of system 2. Summarize the results, and build a markdown table containing x, y, z rows."
"Extract the reusable functions out of this bash script."
"Write me a cfssl command to generate an intermediate CA."
"What is the regex for _____"
"Here are my accomplishments over the last 6 months, summarize them into a 1 page performance report."
etc etc etc
If you're not using GPT-4 or some LLM as part of your daily flow, you're working too hard.
Get GPT4All (https://gpt4all.io), log into OpenAI, drop $20 on your account, get an API key, and start using GPT-4.
- GPT4All: An ecosystem of open-source on-edge large language models - by Nomic AI
-
Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2
Agreed.
Gpt4all[1] offers a similarly simple setup via downloadable application executables, but is arguably more like open core, because the gpt4all makers (Nomic?) want to sell you the vector-database add-on stuff on top.
[1]https://github.com/nomic-ai/gpt4all
I like this one because it feels more private / is not being pushed by a company that can do a rug pull. This can still do a rug pull, but it would be harder to do.
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/
This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend and supports GPU acceleration, and LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.
I just saw a slick new tool, https://ollama.ai/, that lets you run llama2-7b with a single `ollama run llama2` command. It has a very simple one-click installer for Apple Silicon Macs (you need to build from source for anything else at the moment). It looks like it only supports LLaMA-family models out of the box, but it also seems to use llama.cpp (via a Go adapter) on the backend. It appeared to be CPU-only on my MBA, but I didn't poke around too much and it's brand new, so we'll see.
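Both tools mentioned above can also be driven programmatically. As a sketch: the GPT4All desktop app can expose an OpenAI-compatible HTTP server (off by default; commonly on localhost port 4891, but check the app's settings), so a stdlib-only client looks roughly like this. The port, endpoint path, and model name are assumptions to verify against your install:

```python
import json
import urllib.request

# Build an OpenAI-style chat-completions request for a local server.
# Port 4891 and the model filename are assumptions; adjust to your setup.
payload = {
    "model": "mistral-7b-openorca.Q4_0.gguf",  # illustrative local model
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "max_tokens": 16,
}
body = json.dumps(payload).encode("utf-8")

def chat(url: str = "http://localhost:4891/v1/chat/completions") -> str:
    """Send the request to a running local server and return the reply text."""
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# chat() needs a server running; the payload itself is inspectable offline:
print(json.loads(body.decode())["messages"][0]["role"])  # -> user
```

The same request shape works against LocalAI or Ollama's OpenAI-compatible endpoints, which is the point of "drop-in replacement" servers.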
For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama
For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/
I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/
What are some alternatives?
llama.cpp - LLM inference in C/C++
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
ollama - Get up and running with Llama 2, Mistral, Gemma, and other large language models.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and also supports voice cloning.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
gpt4free - The official gpt4free repository | various collection of powerful language models
dolly - Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform