LocalAI
:robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs ggml, gguf, GPTQ, onnx, TF compatible models: llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others
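Since LocalAI exposes an OpenAI-compatible API, the standard `openai` Python client can be pointed at it. A minimal sketch, assuming LocalAI is listening on its usual default port 8080 and that a model called "ggml-gpt4all-j" has been installed (both are assumptions; adjust to your setup):

```python
# Minimal sketch: use LocalAI as a drop-in replacement for the OpenAI API.
# Assumptions: LocalAI is running at localhost:8080 and a model named
# "ggml-gpt4all-j" is installed; change both to match your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # point the client at LocalAI instead of api.openai.com
    api_key="not-needed",                 # LocalAI does not require a real API key by default
)

response = client.chat.completions.create(
    model="ggml-gpt4all-j",               # hypothetical model name; use one you have installed
    messages=[{"role": "user", "content": "Say hello from a self-hosted model."}],
)
print(response.choices[0].message.content)
```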
-
You could run Motorhead in Docker: https://github.com/getmetal/motorhead
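Once the container is up, you can talk to it over HTTP. A rough sketch from Python; the port (8080) and the `/sessions/<id>/memory` endpoint are assumptions based on the Motorhead README, so verify them against the repo before relying on this:

```python
# Rough sketch of using a Motorhead container as a chat-memory server.
# Assumptions: the container is reachable at localhost:8080 and exposes
# /sessions/<session_id>/memory as described in the Motorhead README.
import requests

BASE_URL = "http://localhost:8080"
SESSION_ID = "demo-session"  # arbitrary example session id

# Store a couple of chat messages in the session's memory.
requests.post(
    f"{BASE_URL}/sessions/{SESSION_ID}/memory",
    json={"messages": [
        {"role": "Human", "content": "Hi, my name is Ada."},
        {"role": "AI", "content": "Nice to meet you, Ada!"},
    ]},
    timeout=10,
).raise_for_status()

# Read the memory back (recent messages plus any generated summary).
memory = requests.get(f"{BASE_URL}/sessions/{SESSION_ID}/memory", timeout=10).json()
print(memory)
```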
-
Maybe llama.cpp is what you need. It doesn't even require a GPU and can run on a mobile device.
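If you go the llama.cpp route, the llama-cpp-python bindings make it easy to call from Python. A small sketch; the model path is a placeholder for whatever GGUF/GGML file you have downloaded locally:

```python
# Sketch of running a local model with llama.cpp via the llama-cpp-python
# bindings (pip install llama-cpp-python). The model path is a placeholder;
# point it at a quantized model file you have on disk.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")  # hypothetical local file

output = llm(
    "Q: Can this run without a GPU? A:",
    max_tokens=64,
    stop=["Q:"],   # stop before the model starts a new question
)
print(output["choices"][0]["text"])
```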
-
You also need an LLM to do this. Please check this out to pick one from the llama family. Other projects like llama.onnx and alpaca-native, and the llama models on Hugging Face, are also worth checking out.
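If you pull weights from Hugging Face, the `huggingface_hub` package can fetch them programmatically. The repo and file names below are hypothetical placeholders, not a recommendation of a specific checkpoint:

```python
# Sketch of downloading model weights from the Hugging Face Hub
# (pip install huggingface_hub). repo_id and filename are placeholders;
# substitute the llama-family checkpoint you actually want.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",   # hypothetical example repo
    filename="llama-2-7b-chat.Q4_K_M.gguf",    # hypothetical example file
)
print(f"Model downloaded to: {model_path}")
```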