Must have been a while ago when you last checked, since it got added with Release 1.7.0 three weeks ago. With simple-proxy-for-tavern it has supported streaming for even longer, and I still prefer to use it with that, because it streams character by character instead of token by token, which feels smoother.
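To make the character-by-character point concrete, here's a minimal illustrative sketch (not simple-proxy-for-tavern's actual code) of re-chunking a token-by-token stream into single characters before display; the function name and the fake token list are made up for the example:

```python
# Illustrative only: turn a token-by-token stream into a
# character-by-character stream for smoother display.
from typing import Iterable, Iterator

def chars_from_tokens(token_stream: Iterable[str]) -> Iterator[str]:
    """Yield one character at a time from a stream of token strings."""
    for token in token_stream:
        for ch in token:
            yield ch

# A fake token stream, roughly how a backend might emit text deltas.
tokens = ["Hel", "lo", ", ", "world", "!"]
for ch in chars_from_tokens(tokens):
    print(ch, end="", flush=True)  # emit one character at a time
print()
```

The total text is identical either way; only the granularity of the updates changes, which is why it just *feels* smoother in the UI.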
Anyone know what supports the most models and has the fastest web UI? Or at least what everyone is using. I've seen https://github.com/oobabooga/text-generation-webui and https://github.com/ParisNeo/lollms-webui.
oobazz + SillyTavern
I couldn't really tell you on that. I only run Linux and I've never tried to run it on Windows. I just get the source from the github and build it from there. I didn't even know there were pre-built binaries distributed.
https://gpt4all.io/ is what I have been using and it is solid -- it's a locally installed app not a webUI though -- but same idea.
Other than Ooba, this is my fav (and works with a TON of model architectures) -> https://github.com/shinomakoi/magi_llm_gui
Related posts
-
Group chats vs online defined characters, token efficiency question
-
SillyTavern 1.11.0 has been released
-
Is possible to run local voice chat agent? If yes what GPU do i Need with 500€ budget?
-
SillyTavern 1.10.10 has been released
-
🐺🐦‍⬛ LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)