| | burn | Enzyme |
|---|---|---|
| Mentions | 9 | 16 |
| Stars | 7,074 | 1,159 |
| Growth | 5.1% | 1.6% |
| Activity | 9.8 | 9.7 |
| Latest commit | 4 days ago | 3 days ago |
| Language | Rust | LLVM |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
burn
-
3 years of fulltime Rust game development, and why we're leaving Rust behind
You can use libtorch directly via `tch-rs`, and at present I'm porting over to Burn (see https://burn.dev) which appears incredibly promising. My impression is it's in a good place, if of course not close to the ecosystem of Python/C++. At very least I've gotten my nn models training and running without too much difficulty. (I'm moving to Burn for the thread safety - their `Tensor` impl is `Sync` - libtorch doesn't have such a guarantee.)
Burn has Candle as one of its backends, which I understand is also quite popular.
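The thread-safety point above is concrete: because Burn's `Tensor` is `Sync`, one tensor can be shared behind an `Arc` and read from many threads at once. A minimal sketch of that pattern, using a stand-in struct rather than Burn's real type so it stays self-contained:

```rust
use std::sync::Arc;
use std::thread;

// Compile-time check that a type can be shared across threads.
fn assert_sync<T: Sync>(_: &T) {}

// Stand-in for a tensor type. Burn's `Tensor` is `Sync`, so the same
// pattern applies to it; this struct exists only for illustration.
struct FakeTensor {
    data: Vec<f32>,
}

fn main() {
    let t = Arc::new(FakeTensor { data: vec![1.0, 2.0, 3.0] });
    assert_sync(&*t); // would fail to compile for a non-Sync type

    // Read the shared tensor from several threads without copying it.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let t = Arc::clone(&t);
            thread::spawn(move || t.data.iter().sum::<f32>() + i as f32)
        })
        .collect();

    let sums: Vec<f32> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(sums, vec![6.0, 7.0, 8.0, 9.0]);
    println!("shared tensor read from 4 threads: {:?}", sums);
}
```

With a `Sync` tensor, read-only sharing needs no mutex at all; that is the guarantee libtorch's handles don't make.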
- Burn: Deep Learning Framework built using Rust
-
Transitioning From PyTorch to Burn
```toml
[package]
name = "resnet_burn"
version = "0.1.0"
edition = "2021"

[dependencies]
burn = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11", features = ["ndarray"] }
burn-import = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11" }
image = { version = "0.24.7", features = ["png", "jpeg"] }
```
- Burn Deep Learning Framework Release 0.12.0 Improved API and PyTorch Integration
-
Supercharge Web AI Model Testing: WebGPU, WebGL, and Headless Chrome
Great!
For the Burn project, we have a WebGPU example, and I was looking into how we could add automated tests in the browser. Now it seems possible.
Here is the image classification example if you'd like to check it out:
https://github.com/tracel-ai/burn/tree/main/examples/image-c...
-
Burn Deep Learning Framework 0.11.0 Released: Just-in-Time Automatic Kernel Fusion & Founding Announcement
Full release notes: https://github.com/tracel-ai/burn/releases/tag/v0.11.0
- Burn Deep Learning Framework v0.11.0 Released: Just-in-Time Kernel Fusion
- Burn – comprehensive dynamic Deep Learning Framework built using Rust
- Burn: Deep Learning Framework in Rust
Enzyme
-
Show HN: Curve Fitting Bezier Curves in WASM with Enzyme AD
Automatic differentiation is done using https://enzyme.mit.edu/
-
Ask HN: What Happened to TensorFlow Swift
Lattner left Google and was the primary reason they chose Swift, so they lost interest.
If you're asking from an ML perspective, I believe the original motivation was to incorporate automatic differentiation into the Swift compiler. I believe Enzyme is the spiritual successor.
https://github.com/EnzymeAD/Enzyme
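For context on what "automatic differentiation in the compiler" means: both the Swift effort and Enzyme propagate derivatives through ordinary code rather than requiring a separate symbolic step. A toy forward-mode sketch using dual numbers shows the idea (Enzyme itself operates on LLVM IR and is best known for reverse mode, so this is only an illustration of the concept, not its mechanism):

```rust
// Dual numbers carry a value and its derivative together; every
// arithmetic rule applies the chain rule as a side effect.
#[derive(Clone, Copy, Debug)]
struct Dual {
    val: f64, // f(x)
    der: f64, // f'(x)
}

impl Dual {
    // Seed an input variable with derivative 1.
    fn var(x: f64) -> Self {
        Dual { val: x, der: 1.0 }
    }
    fn add(self, o: Dual) -> Self {
        Dual { val: self.val + o.val, der: self.der + o.der }
    }
    fn mul(self, o: Dual) -> Self {
        // Product rule: (uv)' = u'v + uv'
        Dual { val: self.val * o.val, der: self.der * o.val + self.val * o.der }
    }
    fn sin(self) -> Self {
        // Chain rule: sin(u)' = u' * cos(u)
        Dual { val: self.val.sin(), der: self.der * self.val.cos() }
    }
}

// f(x) = x^2 + sin(x), so f'(x) = 2x + cos(x)
fn f(x: Dual) -> Dual {
    x.mul(x).add(x.sin())
}

fn main() {
    let x = 1.5;
    let y = f(Dual::var(x));
    let expected = 2.0 * x + x.cos();
    assert!((y.der - expected).abs() < 1e-12);
    println!("f({x}) = {:.6}, f'({x}) = {:.6}", y.val, y.der);
}
```

A compiler-integrated tool does the equivalent transformation on the intermediate representation, so user code never has to switch to a special numeric type.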
-
Show HN: Port of OpenAI's Whisper model in C/C++
https://ispc.github.io/ispc.html
For auto-differentiation, when I need performance or memory I currently use Tapenade ( http://tapenade.inria.fr:8080/tapenade/index.jsp ) and/or manually written gradients when I need to fuse some kernel, but Enzyme ( https://enzyme.mit.edu/ ) is also very promising.
MPI for parallelization across machines.
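When gradients are written by hand to fuse kernels, as described above, the usual sanity check is a comparison against a central finite difference. A small self-contained sketch under that assumption (the function and all names here are illustrative, not taken from Tapenade or Enzyme):

```rust
// Example "kernel": f(x) = sum over i of x_i^2 * x_{i+1}.
fn f(x: &[f64]) -> f64 {
    x.windows(2).map(|w| w[0] * w[0] * w[1]).sum()
}

// Hand-written gradient of f.
fn grad_f(x: &[f64]) -> Vec<f64> {
    let n = x.len();
    let mut g = vec![0.0; n];
    for i in 0..n - 1 {
        g[i] += 2.0 * x[i] * x[i + 1]; // d/dx_i of x_i^2 * x_{i+1}
        g[i + 1] += x[i] * x[i];       // d/dx_{i+1} of x_i^2 * x_{i+1}
    }
    g
}

// Central finite difference: (f(x + h e_i) - f(x - h e_i)) / 2h.
fn fd_grad(x: &[f64], h: f64) -> Vec<f64> {
    (0..x.len())
        .map(|i| {
            let mut xp = x.to_vec();
            let mut xm = x.to_vec();
            xp[i] += h;
            xm[i] -= h;
            (f(&xp) - f(&xm)) / (2.0 * h)
        })
        .collect()
}

fn main() {
    let x = [0.5, -1.2, 2.0, 0.3];
    let analytic = grad_f(&x);
    let numeric = fd_grad(&x, 1e-5);
    for (a, b) in analytic.iter().zip(&numeric) {
        assert!((a - b).abs() < 1e-6, "analytic {a} vs numeric {b}");
    }
    println!("hand-written gradient matches finite differences: {:?}", analytic);
}
```

The central difference is O(h^2) accurate, so with h = 1e-5 any disagreement beyond ~1e-8 points at a bug in the hand-written gradient rather than at rounding error.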
-
Do you consider making a physics engine (for RL) worth it?
For autodiff, we are currently working again on publishing a new Enzyme (https://enzyme.mit.edu) frontend for Rust which can also handle pure Rust types; a first version should be done in about a week.
-
What is a really cool thing you would want to write in Rust but don't have enough time, energy or bravery for?
Have you taken a look at EnzymeAD? There is a group porting it to Rust.
-
The Julia language has a number of correctness flaws
Enzyme dev here, so take everything I say as being a bit biased:
While, by design, Enzyme is able to run very fast by operating within the compiler (see https://proceedings.neurips.cc/paper/2020/file/9332c513ef44b... for details), it aggressively prioritizes correctness. Of course that doesn't mean there aren't bugs (we're only human and it's a large codebase [https://github.com/EnzymeAD/Enzyme], especially if you're trying out newly added features).
Notably, this is where the current rough edges for Julia users are -- Enzyme will throw an error saying it couldn't prove correctness, rather than running (there is a flag for "making a best guess", but that's off by default). The exception to this is garbage collection, for which you can either run a static analysis, or stick to the "officially supported" subset of Julia that Enzyme specifies.
Incidentally, this is also where being a cross-language tool is really nice -- namely, we see edge cases/bug reports from any LLVM-based language (C/C++, Fortran, Swift, Rust, Python, Julia, etc.). So far the biggest code we've handled (and verified correctness for) was O(1 million) lines of LLVM from some C++ template hell.
I will also add that while I absolutely love (and will do everything I can to support) Enzyme being used throughout arbitrary Julia code: in addition to exposing a nice user-facing interface for custom rules in the Enzyme Julia bindings like Chris mentioned, some Julia-specific features (such as full garbage collection support) also need handling in Enzyme.jl, before Enzyme can be considered an "all Julia AD" framework. We are of course working on all of these things (and the more the merrier), but there's only a finite amount of time in the day. [^]
[^] Incidentally, this is in contrast to say C++/Fortran/Swift/etc, where Enzyme has much closer to whole-language coverage than Julia -- this isn't anything against GC/Julia/etc, but we just have things on our todo list.
-
Jax vs. Julia (Vs PyTorch)
Idk, Enzyme is pretty next gen, all the way down to LLVM code.
https://github.com/EnzymeAD/Enzyme
-
What's everyone working on this week (7/2022)?
I'm working on merging my build-tool for (oxide)-enzyme into Enzyme itself. Also looking into improving the documentation.
- Wsmoses/Enzyme: High-performance automatic differentiation of LLVM
-
Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia
That seems to be one of the points of Enzyme [1], which was mentioned in the article.
[1] - https://enzyme.mit.edu/
Being able, in effect, to do interprocedural cross-language analysis seems awesome.
What are some alternatives?
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
Zygote.jl - 21st century AD
candle - Minimalist ML framework for Rust
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
tch-rs - Rust bindings for the C++ api of PyTorch.
Lux.jl - Explicitly Parameterized Neural Networks in Julia
rust-mlops-template - A work in progress to build out solutions in Rust for MLOPs
linfa - A Rust machine learning framework.
llama2.rs - A fast llama2 decoder in pure Rust.
faust - Functional programming language for signal processing and sound synthesis