| | burn | llama2.rs |
|---|---|---|
| Mentions | 9 | 3 |
| Stars | 7,074 | 982 |
| Growth | 5.1% | - |
| Activity | 9.8 | 8.9 |
| Latest commit | 1 day ago | 5 months ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
burn
- 3 years of fulltime Rust game development, and why we're leaving Rust behind
You can use libtorch directly via `tch-rs`, and at present I'm porting over to Burn (see https://burn.dev), which looks incredibly promising. My impression is that it's in a good place, though of course nowhere near the ecosystem of Python/C++. At the very least I've gotten my NN models training and running without too much difficulty. (I'm moving to Burn for the thread safety: their `Tensor` impl is `Sync`, a guarantee libtorch doesn't offer.)
Burn has Candle as one of its backends, which I understand is also quite popular.
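The `Sync` point above can be illustrated with plain standard-library Rust: a type that is `Send + Sync` can be wrapped in `Arc` and read from several threads at once with no locking. This is only a sketch of why that guarantee matters for inference; the `Weights` struct below is a hypothetical stand-in, not Burn's actual `Tensor` API.

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for a tensor of model weights. Because it
// contains only f32 data, Rust auto-derives Send + Sync, so shared
// read-only access across threads is safe by construction.
struct Weights {
    data: Vec<f32>,
}

impl Weights {
    fn dot(&self, other: &[f32]) -> f32 {
        self.data.iter().zip(other).map(|(a, b)| a * b).sum()
    }
}

fn main() {
    let weights = Arc::new(Weights { data: vec![1.0, 2.0, 3.0] });
    let input = [4.0_f32, 5.0, 6.0];

    // Each worker thread gets a cheap Arc clone; no mutex is needed
    // because access is read-only and Weights: Sync.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let w = Arc::clone(&weights);
            thread::spawn(move || w.dot(&input))
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 32.0); // 1*4 + 2*5 + 3*6
    }
    println!("all threads computed 32.0");
}
```

With a non-`Sync` tensor type (the commenter's complaint about libtorch), the `thread::spawn` calls above would simply fail to compile, which is the safety property being praised.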
- Burn: Deep Learning Framework built using Rust
- Transitioning From PyTorch to Burn
```toml
[package]
name = "resnet_burn"
version = "0.1.0"
edition = "2021"

[dependencies]
burn = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11", features = ["ndarray"] }
burn-import = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11" }
image = { version = "0.24.7", features = ["png", "jpeg"] }
```
- Burn Deep Learning Framework Release 0.12.0 Improved API and PyTorch Integration
- Supercharge Web AI Model Testing: WebGPU, WebGL, and Headless Chrome
Great!
For the Burn project, we have a WebGPU example, and I was looking into how we could add automated tests in the browser. Now it seems possible.
Here is the image classification example if you'd like to check out:
https://github.com/tracel-ai/burn/tree/main/examples/image-c...
- Burn Deep Learning Framework 0.11.0 Released: Just-in-Time Automatic Kernel Fusion & Founding Announcement
Full release notes: https://github.com/tracel-ai/burn/releases/tag/v0.11.0
- Burn Deep Learning Framework v0.11.0 Released: Just-in-Time Kernel Fusion
- Burn – comprehensive dynamic Deep Learning Framework built using Rust
- Burn: Deep Learning Framework in Rust
llama2.rs
- Ask HN: Cheapest hardware to run Llama 2 70B
This code runs Llama 2 quantized and unquantized in a roughly minimal way: https://github.com/srush/llama2.rs (though extracting the quantized 70B weights takes a lot of RAM). I'm running the 13B quantized model in ~10-11 GB of CPU memory.
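The ~10-11 GB figure is roughly consistent with a back-of-envelope estimate: at 4 bits per weight, a 13B-parameter model needs about 6.5 GB for raw weights, with quantization scales, the KV cache, and activations accounting for the rest. A quick sketch of that arithmetic (the parameter counts and the overhead attribution are assumptions, not measurements from llama2.rs):

```rust
// Back-of-envelope memory estimate for a quantized LLM's weights.
// params * bits / 8 gives bytes; divide by 1e9 for GB.
fn quantized_weight_gb(params: f64, bits_per_weight: f64) -> f64 {
    params * bits_per_weight / 8.0 / 1e9
}

fn main() {
    // ~13e9 parameters at 4 bits each ≈ 6.5 GB of raw weights.
    let gb_13b = quantized_weight_gb(13e9, 4.0);
    println!("13B @ 4-bit weights: {gb_13b:.1} GB");

    // A 70B model at 4 bits already needs ~35 GB for weights alone,
    // which is why the 70B extraction step is so RAM-hungry.
    let gb_70b = quantized_weight_gb(70e9, 4.0);
    println!("70B @ 4-bit weights: {gb_70b:.1} GB");
}
```

The gap between 6.5 GB of weights and the reported ~10-11 GB of process memory is plausibly group-quantization scale factors plus runtime buffers, but that split is a guess.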
- Candle: Torch Replacement in Rust
Nowhere near as neat as Candle or ggml, but I just released a 4-bit Rust Llama 2 implementation with SIMD. It runs pretty fast.
https://github.com/srush/llama2.rs/
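To give a feel for what "4-bit" means in practice, here is a minimal sketch of symmetric 4-bit quantization in plain Rust: two 4-bit codes packed per byte, dequantized with a per-group scale. This illustrates the general technique only; it is not llama2.rs's actual storage format, and it omits the SIMD the comment mentions.

```rust
/// Quantize f32 weights to 4-bit codes in [-8, 7], two per byte,
/// with one shared scale for the whole slice (one "group").
fn quantize_4bit(weights: &[f32]) -> (Vec<u8>, f32) {
    let max = weights.iter().fold(0.0_f32, |m, w| m.max(w.abs()));
    let scale = if max == 0.0 { 1.0 } else { max / 7.0 };
    let mut packed = Vec::with_capacity((weights.len() + 1) / 2);
    for pair in weights.chunks(2) {
        let q = |w: f32| ((w / scale).round().clamp(-8.0, 7.0) as i8 & 0x0f) as u8;
        let lo = q(pair[0]);
        let hi = q(*pair.get(1).unwrap_or(&0.0));
        packed.push(lo | (hi << 4));
    }
    (packed, scale)
}

/// Dequantize n values back to f32 (lossy: at most half a step of error).
fn dequantize_4bit(packed: &[u8], scale: f32, n: usize) -> Vec<f32> {
    let mut out = Vec::with_capacity(n);
    for byte in packed {
        for nibble in [byte & 0x0f, byte >> 4] {
            if out.len() == n {
                break;
            }
            // Sign-extend the 4-bit code to i8, then rescale.
            let code = ((nibble << 4) as i8) >> 4;
            out.push(code as f32 * scale);
        }
    }
    out
}

fn main() {
    let w = [0.9_f32, -0.3, 0.0, 0.45];
    let (packed, scale) = quantize_4bit(&w);
    assert_eq!(packed.len(), 2); // 4 weights -> 2 bytes: an 8x saving vs f32
    let back = dequantize_4bit(&packed, scale, w.len());
    for (a, b) in w.iter().zip(&back) {
        assert!((a - b).abs() <= scale / 2.0 + 1e-6);
    }
    println!("round-trip within half a quantization step: {back:?}");
}
```

Real implementations quantize per group (e.g. 32 or 128 weights per scale) rather than per tensor, and unpack several codes at once with SIMD; the packing idea is the same.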
- Llama2.rs: One-file Rust implementation of Llama2
What are some alternatives?
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
candle - Minimalist ML framework for Rust
euclid - Geometry primitives (basic linear algebra) for Rust
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
tch-rs - Rust bindings for the C++ api of PyTorch.
llama.cpp - LLM inference in C/C++
rust-mlops-template - A work in progress to build out solutions in Rust for MLOps
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
corgi - A neural network, and tensor dynamic automatic differentiation implementation for Rust.
syntaxdot - Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.