petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
chat.petals.dev
💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client
This sounds (very narrowly) similar to the Enigma network, a blockchain-based technology for fully encrypted multi-party computation (MPC). It was one of the earlier blockchain projects that actually had an interesting use case and technology in that quite "overhyped" space. They rebranded to the Secret Network [0] a few years back, and somehow I don't see that use case/promise anymore; the website seems to scream all the Web3 BS buzzwords :(
For LLMs, the closest thing that comes to mind is KoboldAI[1]. The community isn't as big as Stable Diffusion's, but the Discord server is pretty active. I'm an active member of the community who likes to spread it around, haha.
Like Stable Diffusion, it's a web UI (vaguely reminiscent of NovelAI's) that sits on top of a backend (in this case, Hugging Face Transformers). You can use different model architectures, from ones as early as GPT-2 to newer ones like BigScience's BLOOM, Meta's OPT, and EleutherAI's GPT-Neo and Pythia, as long as the architecture is implemented in Hugging Face Transformers.
They have official support for Google Colab[2][3]; most of the models shown are finetunes on novels (Janeway), choose-your-own-adventure stories (Nerys / Skein / Adventure), or erotic literature (Erebus / Shinen). You can use one of the listed models or provide a Hugging Face URL.
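Since any causal LM implemented in Transformers works, loading one follows the standard `AutoModelForCausalLM` pattern. A minimal sketch (not KoboldAI's actual code; the model id here is a tiny test checkpoint chosen for speed, where a KoboldAI user would instead pick one of the finetunes above or any other Hugging Face model id):

```python
# Load any Transformers-implemented causal LM by its Hugging Face id and
# generate a continuation. "sshleifer/tiny-gpt2" is a tiny randomly
# initialized test checkpoint, so the continuation itself is gibberish;
# swap in a real model id for meaningful output.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sshleifer/tiny-gpt2"  # stand-in for e.g. a KoboldAI finetune
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and greedily generate a few new tokens.
inputs = tokenizer("You enter the cave and see", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10, do_sample=False)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

The same three calls (`from_pretrained` for tokenizer and model, then `generate`) cover everything from GPT-2 up through BLOOM and OPT, which is what lets a frontend like KoboldAI stay architecture-agnostic.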
[1] - https://github.com/koboldai/koboldai-client (source code)