https://github.com/LAION-AI/Open-Assistant looks really interesting.
As others have pointed out, running a truly large language model like GPT-3 isn't (yet) feasible on your own hardware - you need a LOT of powerful GPUs racked up in order to run inference.
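To put a rough number on "a LOT of GPUs": GPT-3's published size is 175B parameters, and a common rule of thumb is 2 bytes per parameter in fp16 for the weights alone. A quick back-of-the-envelope calculation (the 80 GB figure is an assumed high-end GPU, and this ignores activations and KV cache, which add more on top):

```python
# Back-of-the-envelope VRAM estimate for serving a large model.
# 175B is GPT-3's published parameter count; everything else here is
# a rule of thumb, not a measurement.
def vram_needed_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory for the weights alone (fp16 = 2 bytes/param)."""
    return n_params * bytes_per_param / 1e9

weights_gb = vram_needed_gb(175e9)       # 350 GB just for weights
gpus_needed = -(-int(weights_gb) // 80)  # ceil-divide by an assumed 80 GB GPU
print(f"{weights_gb:.0f} GB of weights -> at least {gpus_needed} x 80 GB GPUs")
```

That's at least five data-center GPUs before you've served a single token, which is why single-machine inference on the largest models is out of reach for home hardware.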
https://github.com/bigscience-workshop/petals is a really interesting project here: it works a bit like BitTorrent, letting you join a larger network of people who share time on their GPUs, so the network can run models that can't fit on any single member's hardware.
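The core idea is pipeline sharding: each peer hosts a contiguous slice of the model's layers, and a forward pass hops from peer to peer. Here's a toy sketch of that idea in plain Python; the names (`Peer`, `run_distributed`) are illustrative and not the real Petals API:

```python
# Toy sketch of Petals-style pipeline sharding. Each "peer" holds a
# slice of the model's layers; the client routes activations through
# the peers in sequence, so no single machine holds the whole model.
from typing import Callable, List

Layer = Callable[[float], float]

class Peer:
    def __init__(self, layers: List[Layer]):
        self.layers = layers  # the slice of the model this peer hosts

    def forward(self, x: float) -> float:
        for layer in self.layers:
            x = layer(x)
        return x

def run_distributed(peers: List[Peer], x: float) -> float:
    # Activations travel peer-to-peer, one pipeline stage at a time.
    for peer in peers:
        x = peer.forward(x)
    return x

# A "model" of 6 layers split across 3 peers, 2 layers each.
layers = [lambda v, i=i: v + i for i in range(6)]
peers = [Peer(layers[i:i + 2]) for i in range(0, 6, 2)]
print(run_distributed(peers, 0.0))  # same result as running all layers locally
```

The real system adds the hard parts this sketch skips: peers joining and leaving, routing around failures, and compressing the activations that cross the network.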
If you want a simple command line chat bot, I made this simple example: https://github.com/atomic14/command_line_chatbot
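The basic shape of a bot like that is just a read-eval-print loop that keeps a transcript so the model sees prior turns. A minimal sketch, with the model call stubbed out (in a real bot you'd swap `generate` for a call to an LLM API or a locally hosted model; the function names here are mine, not from that repo):

```python
# Minimal command-line chat loop. `generate` is pluggable: the stub
# below just returns a placeholder so the structure is testable.
from typing import Callable, List

def build_prompt(history: List[str], user_input: str) -> str:
    # Include the running transcript so the model has conversational context.
    turns = history + [f"Human: {user_input}", "Assistant:"]
    return "\n".join(turns)

def chat_turn(history: List[str], user_input: str,
              generate: Callable[[str], str]) -> str:
    reply = generate(build_prompt(history, user_input))
    history.extend([f"Human: {user_input}", f"Assistant: {reply}"])
    return reply

if __name__ == "__main__":
    history: List[str] = []
    stub = lambda prompt: "(model reply would go here)"  # replace with a real model call
    while True:
        try:
            user_input = input("You: ")
        except EOFError:
            break
        print("Bot:", chat_turn(history, user_input, stub))
```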