Rust llamacpp Projects
- llama-node
Believes in AI democratization: llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; runs locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
You can practice your Rust skills by writing performant glue extensions for higher-level languages such as Node.js (check out napi-rs) and Python, or by complementing JavaScript in the browser if you target WebAssembly.
For instance, see llama-node (https://github.com/Atome-FE/llama-node) for an involved Rust-based Node.js extension. For Python there is PyO3, a Rust-to-Python extension toolset: https://github.com/PyO3/pyo3.
These tools can help you leverage your Rust to write cool new stuff.
Project mention: Go, Python, Rust, and production AI applications | news.ycombinator.com | 2024-03-12

I switched from Python to Rust for my AI stuff. Honestly, I don't care about the things people say Rust is used for. I like it because the package manager, testing, and typings being built into the ecosystem by default makes it so easy to build, versus Python, where it can all be done but you then have to maintain all of those separate tools. The overhead of writing Rust is less than the overhead of dealing with the Python ecosystem. And then you have all the benefits of Rust everyone mentions more often... one other thing no one mentions is the feedback loop between a strongly typed language and a copilot's ability to more accurately generate code.
That being said, there is a real shortage of Rust software for Rust-only projects. I ended up writing a wrapper for llama.cpp and the OpenAI API [0] because I needed it and couldn't find anything out there. Eventually, I do intend to adopt Hugging Face's Candle library [1] (a Rust analogue of Torch). There is something appealing about doing everything in a single language, especially as the monopoly of CUDA inevitably gets chipped away.
[0] https://github.com/ShelbyJenkins/llm_client
Index

# | Project | Stars
---|---|---
1 | llama-node | 849
2 | llm_client | 20