-
text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
I've been wanting to run LLMs locally, and it looks like there is a huge amount of interest from others as well in finally running and creating our own chat-style models.
I came across https://github.com/jmorganca/ollama in a wonderful HN submission a few days ago. I have a MacBook Pro M1 that was top of the line in 2022; the only problem is that I run Debian on it, as I use Linux.
Could someone point a beginner like myself in the right direction on how to run, for example, Wizard Vicuna Uncensored locally on Linux? I would very much appreciate it; thanks for reading.
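For context on the ollama route linked above: once installed (ollama provides a one-line installer for Linux), it runs a local server that exposes an HTTP API on port 11434, so you can talk to a pulled model from any language. A minimal Python sketch, assuming a running `ollama serve` and that `wizard-vicuna-uncensored` is the model tag you pulled (check ollama's model library for the exact name):

```python
import json
import urllib.request

def ask_ollama(prompt, model="wizard-vicuna-uncensored",
               host="http://localhost:11434"):
    """Send a prompt to a locally running ollama server and return the reply.

    Assumes `ollama serve` is running and the model has already been
    pulled, e.g. with `ollama pull wizard-vicuna-uncensored`.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object instead of a token stream
    }
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With a server running, you would call it like:
# print(ask_ollama("Why is the sky blue?"))
```

You can also just use `ollama run <model>` in a terminal for an interactive chat; the HTTP API is mainly useful if you want to build something on top.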
https://github.com/mlc-ai/mlc-llm will let you compile Llama models for various architectures, with quantization, for running locally. I successfully deployed Vicuna v1.5 7B to the Apple App Store recently.
I've seen this mentioned in some guides I have read (I believe this is the one you are referencing: https://github.com/oobabooga/text-generation-webui) and will definitely look into it, thanks!