text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
-
serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.
-
sd-webui-lobe-theme
🅰️ Lobe theme - The modern theme for stable diffusion webui, exquisite interface design, highly customizable UI, and efficiency boosting features.
-
InvokeAI
InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
-
Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
-
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
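The RWKV-LM entry above describes an RNN whose recurrent state replaces attention, so inference cost stays constant per token instead of growing with context length. A toy, unstabilized sketch of the WKV-style recurrence in NumPy (real RWKV adds numerical stabilization and trainable per-channel parameters; names and shapes here are illustrative):

```python
import numpy as np

def wkv_step(state, k, v, w, u):
    """One step of a toy (unstabilized) WKV recurrence.

    state: (num, den) running weighted sums carried between tokens
    k, v:  key and value for the current token
    w:     per-channel decay (negative => exponential decay of old tokens)
    u:     "bonus" weight applied to the current token only
    """
    num, den = state
    # output mixes the running history with the current token
    out = (num + np.exp(u + k) * v) / (den + np.exp(u + k))
    # fold the current token into the state, decaying the past by exp(w)
    num = np.exp(w) * num + np.exp(k) * v
    den = np.exp(w) * den + np.exp(k)
    return (num, den), out

# process a sequence token by token -- O(1) state, no growing context window
rng = np.random.default_rng(0)
d = 4
state = (np.zeros(d), np.zeros(d))
w, u = -0.5 * np.ones(d), 0.1 * np.ones(d)
outs = []
for _ in range(16):
    k, v = rng.normal(size=d), rng.normal(size=d)
    state, out = wkv_step(state, k, v, w, u)
    outs.append(out)
```

Because the state is a fixed-size pair of vectors, memory use does not depend on how many tokens have been seen, which is where the "infinite ctx_len" and VRAM-savings claims come from.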
I'm trying this, and after an hour of playing around with it, it sucks.
I have not tried it myself yet, but another project I came across on Reddit is Serge: https://github.com/nsarrazin/serge. I've put it on my to-do list for the near future, alongside the already-mentioned GPT4All.
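Since Serge is fully dockerized, spinning it up should be roughly a one-file affair. A minimal docker-compose sketch; the image name, port, and volume paths below are assumptions, so check the repo's README for the actual values:

```yaml
# Hypothetical compose file -- image tag, port, and paths are assumptions
services:
  serge:
    image: ghcr.io/serge-chat/serge:latest  # assumed image name; verify in the repo
    ports:
      - "8008:8008"                    # web UI and API port (assumed)
    volumes:
      - weights:/usr/src/app/weights   # persist model weights across restarts
      - datadb:/data/db                # persist chat history
    restart: unless-stopped

volumes:
  weights:
  datadb:
```

With something like this, `docker compose up -d` would bring the UI up on localhost, and the same container exposes the chat API.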
LLaMA, Pythia, RWKV, Flan-T5 (self-hosted), FlexGen
InvokeAI is an alternative to AUTOMATIC1111's stable-diffusion-webui as a front-end for Stable Diffusion, and both should be able to run on an RTX 4000. The base models aren't the easiest to get good results from, but you will find many alternative models on https://civitai.com/ that can all be used with the webui.
I believe Open-Assistant can be run locally, but it's currently still in an early phase. Probably best to wait a month or two more for it to get better.
Yep, there are a lot of LLaMA models available; some are very good, but they of course require more resources. Many of them are capable of taking on GPT-4. Stanford's Alpaca is the one I've mostly seen talked about, but I'm not sure it's necessarily the best option.