arewefastyet
tch-rs
| | arewefastyet | tch-rs |
|---|---|---|
| Mentions | 9 | 37 |
| Stars | 19 | 3,748 |
| Growth | - | - |
| Activity | 0.0 | 7.7 |
| Last commit | about 1 year ago | 24 days ago |
| Language | Rust | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arewefastyet
-
Rust Support in the Linux Kernel
That page averages all the builds across different code bases. It doesn’t specify which version/tag of which code base, nor does it talk about the hardware.
https://arewefastyet.pages.dev/ - This page tracks compile times across some common crates over all supported compiler versions, with different hardware (2, 4, 8, 16 cores). This used to be https://arewefastyet.rs but the domain expired.
-
Rust programming language: We want to take it into the mainstream, says Facebook
For what it's worth, Rust compile times have improved by 33-50% in the last two years, depending on the crate, compiler mode and number of cores - https://arewefastyet.rs. Also, debug builds will get approximately 50% faster when the cranelift backend lands.
You can check incremental compile times on http://arewefastyet.rs. Choose one compile mode (Debug OR Release, preferably Debug), one hardware config (4 cores let's say) and both profile modes (Clean, Incremental).
-
Reducing Rust Incremental Compilation Times on macOS by 70%
Compile times in rustc have been steadily improving with time, as shown here - https://arewefastyet.rs.
Every release doesn't make every workload faster, but over a long time horizon, the effect is clear. Rust 1.34 was released in April 2019 and since then many crates have become 33-50% faster to compile, depending on the hardware and the compiler mode (clean/incremental, check/debug/release).
Interestingly, the speedup mentioned in OP won't show up in these charts because that's a change on macOS and these benchmarks were recorded on Linux.
What is expected to be a game-changer is the release of Cranelift in 2021 or 2022. It's an alternative debug backend that promises much faster debug builds.
-
Announcing Rust 1.50.0
Thanks for your work on arewefastyet.rs, I was about to post a link to it haha
-
[ELI5]: How to write a simple custom Serde de/serializer?
I implemented something similar: deserialising comma-separated strings into a struct - example. Hope that helps!
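For readers wondering what that kind of parser looks like, here is a minimal sketch that parses a comma-separated string into a struct using only the standard library's `FromStr`. In a real serde deserializer, `Deserialize::deserialize` would drive a string visitor running essentially the same parse; the `Point` type and its fields below are purely illustrative, not from the linked example.

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

impl FromStr for Point {
    type Err = String;

    // Parse strings like "3, 4" into a Point, rejecting malformed input.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut parts = s.split(',');
        let x = parts
            .next()
            .ok_or("missing x")?
            .trim()
            .parse()
            .map_err(|e| format!("bad x: {e}"))?;
        let y = parts
            .next()
            .ok_or("missing y")?
            .trim()
            .parse()
            .map_err(|e| format!("bad y: {e}"))?;
        if parts.next().is_some() {
            return Err("too many fields".into());
        }
        Ok(Point { x, y })
    }
}

fn main() {
    let p: Point = "3, 4".parse().unwrap();
    assert_eq!(p, Point { x: 3, y: 4 });
    println!("{p:?}");
}
```

A serde version would wrap the same logic in a `Visitor` whose `visit_str` calls this parse, which is why starting from `FromStr` is a common first step.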
tch-rs
-
Llama2.rs: One-file Rust implementation of Llama2
I wanted to do something like this, but then I would miss out on proper CUDA acceleration and lose performance compared to using libtorch.
I wrote a forgettable llama implementation for https://github.com/LaurentMazare/tch-rs (PyTorch's libtorch Rust bindings).
-
Playing Atari Games in OCaml
I first encountered OCaml's PyTorch bindings because apparently they generate a C wrapper around PyTorch's C++ API, and Rust's PyTorch bindings use OCaml's C wrapper. See: https://github.com/LaurentMazare/tch-rs
-
llm: a Rust crate/CLI for CPU inference of LLMs, including LLaMA, GPT-NeoX, GPT-J and more
You could try looking at the min-GPT example of tch-rs. I'd also strongly suggest watching Karpathy's video to understand what's going on.
-
A Rust client library for interacting with Microsoft Airsim https://github.com/Sollimann/airsim-client
PyTorch
- [D] HuggingFace in Julia or Rust ?
- This year I tried solving AoC using Rust, here are my impressions coming from Python!
-
[Help Needed] Deployment of torchscript using rust
I have looked into this a bit and found a crate called tch-rs that helps with loading TorchScript models.
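For context, the usual entry point in tch-rs for TorchScript is `CModule`. A minimal sketch, assuming a model traced/scripted and saved from Python as `model.pt` (the file name and input shape are placeholders) and a local libtorch install:

```rust
use tch::{CModule, Device, Kind, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a TorchScript module exported from Python with
    // torch.jit.trace or torch.jit.script.
    let model = CModule::load("model.pt")?;

    // Dummy input; the shape must match what the model expects.
    let input = Tensor::randn(&[1, 3, 224, 224], (Kind::Float, Device::Cpu));

    // Run the module's forward method on the input tensor.
    let output = model.forward_ts(&[input])?;
    println!("output shape: {:?}", output.size());
    Ok(())
}
```

Building this requires the `tch` crate and a libtorch installation on the machine, which is why it is a sketch rather than something that runs standalone.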
-
Stable Diffusion with Core ML on Apple Silicon
PyTorch has libtorch as its purely native library. There are also Rust bindings for libtorch:
https://github.com/LaurentMazare/tch-rs
I used this in the past to make a transformer-based syntax annotator. Fully in Rust, no Python required:
-
I could use some basic help
The game is in Rust, and so I have been working at using the pytorch Rust bindings, which have an A2C example, so that's what I've been going with. Example here: https://github.com/LaurentMazare/tch-rs/blob/main/examples/reinforcement-learning/a2c.rs
-
Announcing Burn: New Deep Learning framework with CPU & GPU support using the newly stabilized GAT feature
Burn is different: it is built around the Backend trait, which encapsulates tensor primitives. Even reverse-mode automatic differentiation is just a backend that wraps another one using the decorator pattern. The goal is to make it very easy to create optimized backends and to support different devices and use cases. For now, there are only three backends: NdArray (https://github.com/rust-ndarray/ndarray) for a pure-Rust solution, Tch (https://github.com/LaurentMazare/tch-rs) for easy access to CUDA- and cuDNN-optimized operations, and the ADBackendDecorator, which makes any backend differentiable. I am now refactoring the internal backend API to make it as easy as possible to plug in new ones.
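The decorator idea described above can be sketched in plain Rust (the trait and type names below are invented for illustration, not Burn's actual API): a `Backend` trait exposing tensor ops, a trivial `Vec`-backed backend, and a wrapper backend that forwards every op while recording it. Recording ops onto a tape is the same structural trick an autodiff decorator uses.

```rust
use std::cell::RefCell;

// Illustrative backend trait: one associated tensor type, one op.
trait Backend {
    type Tensor;
    fn add(&self, a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor;
}

// A trivial "backend" backed by Vec<f32>.
struct VecBackend;

impl Backend for VecBackend {
    type Tensor = Vec<f32>;
    fn add(&self, a: &Vec<f32>, b: &Vec<f32>) -> Vec<f32> {
        a.iter().zip(b).map(|(x, y)| x + y).collect()
    }
}

// Decorator: wraps any backend, forwards ops, and records each op
// on a tape -- the same shape as wrapping a backend for autodiff.
struct RecordingDecorator<B: Backend> {
    inner: B,
    tape: RefCell<Vec<&'static str>>,
}

impl<B: Backend> Backend for RecordingDecorator<B> {
    type Tensor = B::Tensor;
    fn add(&self, a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor {
        self.tape.borrow_mut().push("add");
        self.inner.add(a, b)
    }
}

fn main() {
    let backend = RecordingDecorator {
        inner: VecBackend,
        tape: RefCell::default(),
    };
    let out = backend.add(&vec![1.0, 2.0], &vec![3.0, 4.0]);
    assert_eq!(out, vec![4.0, 6.0]);
    assert_eq!(backend.tape.borrow().len(), 1);
    println!("{out:?}");
}
```

Because the decorator implements the same trait it wraps, any code generic over `Backend` works unchanged whether or not recording (or differentiation) is enabled.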
What are some alternatives?
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
candle - Minimalist ML framework for Rust
cbindgen - A project for generating C bindings from Rust code
bevy - A refreshingly simple data-driven game engine built in Rust
wtpsplit - Code for Where's the Point? Self-Supervised Multilingual Punctuation-Agnostic Sentence Segmentation
veloren - An open world, open source voxel RPG inspired by Dwarf Fortress and Cube World. This repository is a mirror. Please submit all PRs and issues on our GitLab page.
rustlearn - Machine learning crate for Rust
gdnative - Rust bindings for Godot 3
burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals. [Moved to: https://github.com/Tracel-AI/burn]
linfa - A Rust machine learning framework.
tokenizers - 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production