I have no idea, but the first commit in that repo is this one: https://github.com/nomic-ai/gpt4all/commit/4f5c513e710b73ad5...
“Three weeks ago”
So uh… that’s total BS, unless there’s more history to it than is obvious.
You're being cynical for no good reason. Everything we've done has been put out under Apache 2 or MIT license. We take the 4 all seriously. See here for proof: https://github.com/nomic-ai/gpt4all-chat/issues/62#issue-167...
This plus the vicuna models is probably as close as you can get in the near term.
https://github.com/oobabooga/text-generation-webui
You can run it on your browser with webgpu https://mlc.ai/web-llm/
Hi, I was also looking into this and I am now using https://github.com/abetlen/llama-cpp-python. It tries to be compatible with the OpenAI API. I managed to run AutoGPT with it, though the context window is too small to be useful: even with it set to 2048 (the max), I had to cap AutoGPT's context at 1024 for it to work.
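For anyone trying the same setup, a rough sketch of what that looks like. The model path, port, and request body are placeholders, not from the thread; the server module and flags come from llama-cpp-python's OpenAI-compatible server:

```shell
# Minimal sketch (model path and port are assumptions, not from the thread).
# Install the package with the server extras:
pip install 'llama-cpp-python[server]'

# Serve an OpenAI-compatible API locally; --n_ctx is the context window
# size discussed above (2048 is the model's maximum here).
python -m llama_cpp.server --model ./models/ggml-model-q4_0.bin --n_ctx 2048

# Any OpenAI-style client can then target http://localhost:8000/v1, e.g.:
curl http://localhost:8000/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello", "max_tokens": 16}'
```

Tools like AutoGPT that expect the OpenAI API can then be pointed at the local base URL instead of api.openai.com.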