-
I have a question: as the maintainer of [neuronika](https://github.com/neuronika/neuronika), a crate that offers dynamic neural networks and auto-differentiation with dynamic graphs, I'm looking into a possible future feature for the framework: the ability to compile models, thereby getting rid of the "dynamic" part, which is not always needed. This would speed up inference and training times quite a bit.
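To make the idea concrete, here is a minimal sketch of the difference between the two modes. None of the types or method names below are neuronika's actual API; they are hypothetical stand-ins. In dynamic mode the operation tape is rebuilt on every forward pass, while a compiled model would record the structure once and only replay it on the hot path.

```rust
// Hypothetical sketch only: none of these types are part of neuronika's API.
#[derive(Clone, Copy)]
enum Op {
    Scale(f32), // toy stand-ins for real layers/kernels
    Relu,
}

fn run(tape: &[Op], input: &[f32]) -> Vec<f32> {
    // Toy interpreter; a real backend would dispatch optimized kernels here.
    let mut out = input.to_vec();
    for &op in tape {
        match op {
            Op::Scale(s) => out.iter_mut().for_each(|x| *x *= s),
            Op::Relu => out.iter_mut().for_each(|x| *x = x.max(0.0)),
        }
    }
    out
}

/// Dynamic mode: the tape is allocated and recorded on every call.
fn forward_dynamic(input: &[f32]) -> Vec<f32> {
    let tape = vec![Op::Scale(2.0), Op::Relu];
    run(&tape, input)
}

/// "Compiled" mode: the tape is fixed ahead of time, so the hot path
/// does no graph bookkeeping at all.
struct CompiledModel {
    tape: Vec<Op>,
}

impl CompiledModel {
    fn compile() -> Self {
        Self {
            tape: vec![Op::Scale(2.0), Op::Relu],
        }
    }

    fn forward(&self, input: &[f32]) -> Vec<f32> {
        run(&self.tape, input)
    }
}

fn main() {
    let input = [-1.0_f32, 3.0];
    assert_eq!(forward_dynamic(&input), CompiledModel::compile().forward(&input));
}
```

A frozen structure would presumably also open the door to ahead-of-time optimizations such as buffer reuse, which is where much of the speedup would come from.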
-
[rust-ndarray/ndarray](https://github.com/rust-ndarray/ndarray): an N-dimensional array with array views, multidimensional slicing, and efficient operations
I don't think any of the major ML projects have GPU acceleration because ndarray doesn't support it.
-
Would you be interested in collaborating and making it part of Rust CUDA? cuDNN is my next target after cuBLAS, but it is a lot of work for one person. I would like to keep all library wrappers inside one org/repo so there is no ambiguity about which will likely be the most complete and/or best maintained.
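For a sense of the work involved per routine, here is a rough sketch of the kind of hand-written binding plus safe wrapper such a crate has to maintain. The extern declarations mirror the real cuBLAS v2 C entry points (cublasCreate_v2, cublasDestroy_v2, cublasSaxpy_v2); the `Handle` type around them is purely hypothetical and not the Rust CUDA project's actual API.

```rust
use std::os::raw::{c_int, c_void};

type CublasStatus = c_int; // cublasStatus_t; 0 is CUBLAS_STATUS_SUCCESS
type CublasHandle = *mut c_void; // opaque cublasHandle_t

#[link(name = "cublas")]
extern "C" {
    fn cublasCreate_v2(handle: *mut CublasHandle) -> CublasStatus;
    fn cublasDestroy_v2(handle: CublasHandle) -> CublasStatus;
    // y = alpha * x + y, operating on device pointers
    fn cublasSaxpy_v2(
        handle: CublasHandle,
        n: c_int,
        alpha: *const f32,
        x: *const f32,
        incx: c_int,
        y: *mut f32,
        incy: c_int,
    ) -> CublasStatus;
}

/// Hypothetical safe wrapper: owns the handle and releases it on drop.
pub struct Handle(CublasHandle);

impl Handle {
    pub fn new() -> Result<Self, CublasStatus> {
        let mut raw: CublasHandle = std::ptr::null_mut();
        let status = unsafe { cublasCreate_v2(&mut raw) };
        if status == 0 { Ok(Handle(raw)) } else { Err(status) }
    }

    /// # Safety
    /// `x` and `y` must be valid device pointers to at least `n` f32 values.
    pub unsafe fn saxpy(
        &self,
        n: i32,
        alpha: f32,
        x: *const f32,
        y: *mut f32,
    ) -> Result<(), CublasStatus> {
        let status = unsafe { cublasSaxpy_v2(self.0, n, &alpha, x, 1, y, 1) };
        if status == 0 { Ok(()) } else { Err(status) }
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        unsafe { cublasDestroy_v2(self.0) };
    }
}
```

Multiply that by the hundreds of routines in cuBLAS and cuDNN, and it becomes clear why keeping the wrappers in one place, with more than one maintainer, matters.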