I'm not thrilled about https://github.com/GoogleCloudPlatform/localllm/blob/main/ll... calling their Python package "llm" and installing "llm" as a CLI command, when my similar https://llm.datasette.io/ project has that namespace reserved on PyPI already: https://pypi.org/project/llm/
Slightly off topic, here is the best local llama.cpp wrapper I've run into:
https://github.com/Mozilla-Ocho/llamafile
You can download any .gguf model (not just the ones in their examples) and run it locally, as long as you have the RAM for it. I was running 7B models with ease on an old FX8350, and now 13B models on a 5600X (32GB of RAM on both machines).
This wrapper spins up a local web server that runs a simple web frontend to use immediately with no code, but also exposes an OpenAI compatible API for dev work and alt frontends (like SillyTavern).
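Since it's OpenAI-compatible, you can hit it with any plain HTTP client. A minimal sketch, assuming the llamafile server is running locally on its default port 8080 and exposes the usual /v1/chat/completions endpoint (adjust the URL if you started it differently):

```python
import json
import urllib.request

# Assumption: llamafile's local server listens here with an
# OpenAI-compatible chat completions endpoint.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt, model="local-model"):
    """Build an OpenAI-style chat completion request body.
    The model name is a placeholder; the local server runs
    whatever .gguf you loaded regardless."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(prompt):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request shape is the standard OpenAI one, frontends like SillyTavern can point at the same URL with no code changes.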
This is a Trojan horse. As a former Google employee who worked on their OSS stack, I have to say this honestly feels a bit off brand.
Here's a whole list of other OSS AI frameworks you can use instead: https://github.com/janhq/awesome-local-ai
Disclaimer: I'm one of the core devs on Jan.