I'll plug the project I've started along the same lines: a simplified, CPU-focused embedding model (currently DistilBERT) written as a single file with no dependencies and no abstraction. https://github.com/rbitr/ferrite
Ollama seems to be using a lot of the same approach:
https://github.com/jmorganca/ollama/tree/main/llm
including the GGML and GGUF code from llama.cpp as submodules.
I'm mostly a Python programmer, but I find a lot of the ML frameworks are overkill for what they actually do, especially for inference. Fortran is pretty close to numpy: it handles arrays natively, including slicing and a matmul intrinsic, and you don't have to worry about memory management. But it compiles into something fast and lightweight much more easily than Python does. It's nothing you couldn't do in C, but I think Fortran is better suited to linear algebra.
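For anyone who hasn't touched Fortran since the fixed-form days, here's a toy sketch (not from either repo) of what I mean by native arrays, slicing, and the matmul intrinsic:

```fortran
! Toy example: built-in arrays, numpy-style slicing, and matmul,
! with no external libraries. Compiles with any modern gfortran.
program array_demo
  implicit none
  real :: a(3,4), b(4,2), c(3,2)
  real :: row(4)

  call random_number(a)   ! fill arrays with uniform random values
  call random_number(b)

  c = matmul(a, b)        ! intrinsic matrix multiply, like a @ b
  row = a(2, :)           ! slice out the second row, like a[1, :]

  print *, 'c shape:', shape(c)
  print *, 'row sum:', sum(row)
end program array_demo
```

The compiler knows the array shapes, so it can bounds-check in debug builds and vectorize in release builds without any framework in between.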
See also https://github.com/rbitr/llama2.f90, which is basically the same thing but for running Llama models; it has 16-bit and 4-bit quantization options and a lot more optimization.