On the assumption that you're doing something neural network related: have a look at the examples section for one of its deep learning libraries. (e.g. this example trains an RNN on a toy classification problem)
Fwiw, it’s not like PyTorch’s design prevents function transformations from being implemented. See functorch for an example of grad/vmap function transforms: https://github.com/pytorch/functorch
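To make concrete what a "function transform" means here, a toy pure-Python sketch — emphatically not the functorch API itself. This `grad` uses central finite differences and this `vmap` is a plain loop; functorch and JAX do the real work with autodiff and batched kernels, but the composable function-to-function interface is the point.

```python
# Toy function transforms (assumed/illustrative, not functorch's implementation).

def grad(f, eps=1e-6):
    """Return a new function computing df/dx by central differences."""
    def df(x):
        return (f(x + eps) - f(x - eps)) / (2 * eps)
    return df

def vmap(f):
    """Return a new function mapping f over a list of inputs."""
    def batched(xs):
        return [f(x) for x in xs]
    return batched

square = lambda x: x * x
dsquare = grad(square)             # d/dx x^2 = 2x
batched_dsquare = vmap(dsquare)    # transforms compose freely
print(batched_dsquare([1.0, 2.0, 3.0]))  # approximately [2.0, 4.0, 6.0]
```

The design point is that both transforms take a function and return a function, so they stack in any order — the same property functorch's real `grad` and `vmap` provide over Torch code.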
Not just that paper — our follow-up papers on the same topic, "Neural SDEs as Infinite-Dimensional GANs" and "Efficient and Accurate Gradients for Neural SDEs", are in fact implemented in PyTorch, specifically in the torchsde library. (Disclaimer: I'm one of its developers.)
The one thing I really *really* wish got more attention is named tensors and the tensor type system. Tensor misalignment errors are a constant source of silently-failing bugs. While third-party libraries have attempted to fill this gap, it really needs better native support. In particular, it seems like bad form to me that programmers have to remember the specific alignment and broadcasting rules, and then apply them to an often poorly documented order of tensor indices. I'd really like to see something like tsalib's warp operator made part of the main library and generalized to arbitrary function application, like a named-tensor version of fold — but preferably using notation closer to that of torchtyping.
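To illustrate the failure mode, here is a toy pure-Python sketch (hypothetical names, not tsalib's or torchtyping's actual API) of name-based alignment: positional broadcasting would happily combine misaligned axes, whereas aligning by dimension name either permutes the operand into place or raises an error instead of silently producing garbage.

```python
# Toy named-dimension elementwise add for 2-D nested lists (illustrative only).

def named_add(a, a_dims, b, b_dims):
    """Add two 2-D nested lists, aligning axes by dimension name.

    a_dims / b_dims are tuples of dimension names, e.g. ("batch", "feature").
    If b's names are a permutation of a's, b is transposed to match; any
    mismatch in names or sizes is a loud error, never a silent broadcast.
    """
    if sorted(a_dims) != sorted(b_dims):
        raise ValueError(f"dimension names differ: {a_dims} vs {b_dims}")
    if a_dims != b_dims:  # permute b into a's dimension order (2-D case)
        b = [list(row) for row in zip(*b)]
    a_shape = (len(a), len(a[0]))
    b_shape = (len(b), len(b[0]))
    if a_shape != b_shape:
        raise ValueError(f"size mismatch after alignment: {a_shape} vs {b_shape}")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

x = [[1, 2, 3], [4, 5, 6]]          # dims ("batch", "feature"), shape (2, 3)
y = [[10, 40], [20, 50], [30, 60]]  # dims ("feature", "batch"), shape (3, 2)
print(named_add(x, ("batch", "feature"), y, ("feature", "batch")))
# -> [[11, 22, 33], [44, 55, 66]]
```

With purely positional semantics, a transposed operand like `y` would either broadcast into an unintended shape or fail far from the real bug; carrying names alongside shapes catches the misalignment at the call site.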
Probably the most well-developed options I know for this at the moment are Dex and Hasktorch.
Thanks for the references — interesting projects (including Equinox). I know that C# is not THE language for ML research, and it also lacks variadic generics (and const generics), but they recently introduced something called Source Generators. You can basically generate C# code from existing C# code (syntax trees and such), and it hooks into the static analysis phase. It's integrated with IDEs (JetBrains and Visual Studio), and you can define your own warning or error messages, so it feels pretty native. I'm not sure how it compares to Rust's macros, or whether there are roadblocks along the way, but it may be an option for ensuring shape type safety at compile time for nd-arrays.